
A

abort(Throwable) - Method in interface org.apache.spark.shuffle.api.ShuffleMapOutputWriter
Abort all of the writes done by any writers returned by ShuffleMapOutputWriter.getPartitionWriter(int).
abort(WriterCommitMessage[]) - Method in interface org.apache.spark.sql.connector.write.BatchWrite
Aborts this writing job because some data writers failed and kept failing when retried, the Spark job failed for some unknown reason, BatchWrite.onDataWriterCommit(WriterCommitMessage) failed, or BatchWrite.commit(WriterCommitMessage[]) failed.
abort() - Method in interface org.apache.spark.sql.connector.write.DataWriter
Aborts this writer if it failed.
abort(long, WriterCommitMessage[]) - Method in interface org.apache.spark.sql.connector.write.streaming.StreamingWrite
Aborts this writing job because some data writers failed and kept failing when retried, the Spark job failed for some unknown reason, or StreamingWrite.commit(long, WriterCommitMessage[]) failed.
abortJob(JobContext) - Method in class org.apache.spark.internal.io.FileCommitProtocol
Aborts a job after the writes fail.
abortJob(JobContext) - Method in class org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
Abort the job; log and ignore any IO exception thrown.
abortStagedChanges() - Method in interface org.apache.spark.sql.connector.catalog.StagedTable
Abort the changes that were staged, both in metadata and from temporary outputs of this table's writers.
abortTask(TaskAttemptContext) - Method in class org.apache.spark.internal.io.FileCommitProtocol
Aborts a task after the writes have failed.
abortTask(TaskAttemptContext) - Method in class org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
Abort the task; log and ignore any failure thrown.
abs(Column) - Static method in class org.apache.spark.sql.functions
Computes the absolute value of a numeric value.
abs(T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric

abs() - Method in class org.apache.spark.sql.types.Decimal

abs(T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric

abs(double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric

abs(float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric

abs(T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric

abs(T) - Static method in class org.apache.spark.sql.types.LongExactNumeric

abs(T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric

absent() - Static method in class org.apache.spark.api.java.Optional

AbsoluteError - Class in org.apache.spark.mllib.tree.loss
:: DeveloperApi :: Class for absolute error loss calculation (for regression).
AbsoluteError() - Constructor for class org.apache.spark.mllib.tree.loss.AbsoluteError
 
AbstractLauncher<T extends AbstractLauncher<T>> - Class in org.apache.spark.launcher
Base class for launcher implementations.
accept(Parsers) - Static method in class org.apache.spark.ml.feature.RFormulaParser

accept(ES, Function1<ES, List<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser

accept(String, PartialFunction<Object, U>) - Static method in class org.apache.spark.ml.feature.RFormulaParser

accept(Path) - Method in class org.apache.spark.ml.image.SamplePathFilter

acceptIf(Function1<Object, Object>, Function1<Object, String>) - Static method in class org.apache.spark.ml.feature.RFormulaParser

acceptMatch(String, PartialFunction<Object, U>) - Static method in class org.apache.spark.ml.feature.RFormulaParser

acceptSeq(ES, Function1<ES, Iterable<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser

acceptsType(DataType) - Method in class org.apache.spark.sql.types.ObjectType

accId() - Method in class org.apache.spark.CleanAccum

accumCleaned(long) - Method in interface org.apache.spark.CleanerListener

AccumulableInfo - Class in org.apache.spark.scheduler
:: DeveloperApi :: Information about an AccumulatorV2 modified during a task or stage.
AccumulableInfo - Class in org.apache.spark.status.api.v1

accumulableInfoFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

accumulableInfoToJson(AccumulableInfo) - Static method in class org.apache.spark.util.JsonProtocol

accumulables() - Method in class org.apache.spark.scheduler.StageInfo
Terminal values of accumulables updated during this stage, including all the user-defined accumulators.
accumulables() - Method in class org.apache.spark.scheduler.TaskInfo
Intermediate updates to accumulables during this task.
accumulablesToJson(Iterable<AccumulableInfo>) - Static method in class org.apache.spark.util.JsonProtocol

AccumulatorContext - Class in org.apache.spark.util
An internal class used to track accumulators by Spark itself.
AccumulatorContext() - Constructor for class org.apache.spark.util.AccumulatorContext
 
ACCUMULATORS() - Static method in class org.apache.spark.status.TaskIndexNames

accumulatorUpdates() - Method in class org.apache.spark.status.api.v1.StageData

accumulatorUpdates() - Method in class org.apache.spark.status.api.v1.TaskData

AccumulatorV2<IN,OUT> - Class in org.apache.spark.util
The base class for accumulators, which can accumulate inputs of type IN and produce output of type OUT.
AccumulatorV2() - Constructor for class org.apache.spark.util.AccumulatorV2

accumUpdates() - Method in class org.apache.spark.ExceptionFailure

accumUpdates() - Method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate

accumUpdates() - Method in class org.apache.spark.TaskKilled

accuracy() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
Returns accuracy.
accuracy() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics

accuracy() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
Returns accuracy.
ACLS_ENABLE() - Static method in class org.apache.spark.internal.config.UI

acos(Column) - Static method in class org.apache.spark.sql.functions

acos(String) - Static method in class org.apache.spark.sql.functions

acquire(Seq<String>) - Method in interface org.apache.spark.resource.ResourceAllocator
Acquire a sequence of resource addresses (for a launched task); these addresses must be available.
ActivationFunction - Interface in org.apache.spark.ml.ann
Trait for functions and their derivatives for functional layers.
active() - Static method in class org.apache.spark.sql.SparkSession
Returns the currently active SparkSession, otherwise the default one.
active() - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
Returns a list of active queries associated with this SQLContext.
active() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo

ACTIVE() - Static method in class org.apache.spark.streaming.scheduler.ReceiverState

activeStages() - Method in class org.apache.spark.status.LiveJob

activeTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

activeTasks() - Method in class org.apache.spark.status.LiveExecutor

activeTasks() - Method in class org.apache.spark.status.LiveJob

activeTasks() - Method in class org.apache.spark.status.LiveStage

activeTasksPerExecutor() - Method in class org.apache.spark.status.LiveStage

add(Vector) - Method in class org.apache.spark.ml.clustering.ExpectationAggregator
Add a new training instance to this ExpectationAggregator, update the weights, means, and covariances of each distribution, and update the log likelihood.
add(Term) - Static method in class org.apache.spark.ml.feature.Dot

add(Term) - Static method in class org.apache.spark.ml.feature.EmptyTerm

add(Term) - Method in interface org.apache.spark.ml.feature.Term
Creates a summation term by concatenation of terms.
add(Datum) - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
Add a single data point to this aggregator.
add(AFTPoint) - Method in class org.apache.spark.ml.regression.AFTAggregator
Add a new training data point to this AFTAggregator, and update the loss and gradient of the objective function.
add(double[], MultivariateGaussian[], ExpectationSum, Vector<Object>) - Static method in class org.apache.spark.mllib.clustering.ExpectationSum

add(Vector) - Method in class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
Adds a new document.
add(BlockMatrix) - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
Adds the given block matrix other to this block matrix: this + other.
add(Vector) - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
Add a new sample to this summarizer, and update the statistical summary.
add(StructField) - Method in class org.apache.spark.sql.types.StructType
Creates a new StructType by adding a new field.
add(String, DataType) - Method in class org.apache.spark.sql.types.StructType
Creates a new StructType by adding a new nullable field with no metadata.
add(String, DataType, boolean) - Method in class org.apache.spark.sql.types.StructType
Creates a new StructType by adding a new field with no metadata.
add(String, DataType, boolean, Metadata) - Method in class org.apache.spark.sql.types.StructType
Creates a new StructType by adding a new field and specifying metadata.
add(String, DataType, boolean, String) - Method in class org.apache.spark.sql.types.StructType
Creates a new StructType by adding a new field and specifying metadata.
add(String, String) - Method in class org.apache.spark.sql.types.StructType
Creates a new StructType by adding a new nullable field with no metadata where the dataType is specified as a String.
add(String, String, boolean) - Method in class org.apache.spark.sql.types.StructType
Creates a new StructType by adding a new field with no metadata where the dataType is specified as a String.
add(String, String, boolean, Metadata) - Method in class org.apache.spark.sql.types.StructType
Creates a new StructType by adding a new field and specifying metadata where the dataType is specified as a String.
add(String, String, boolean, String) - Method in class org.apache.spark.sql.types.StructType
Creates a new StructType by adding a new field and specifying metadata where the dataType is specified as a String.
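Each add overload returns a new StructType, so calls chain naturally to build a schema incrementally. A minimal sketch (the field names and types are illustrative, not from any particular Spark example):

```scala
import org.apache.spark.sql.types._

// Each add returns a fresh StructType; the original is left unchanged.
val schema = new StructType()
  .add("id", LongType, nullable = false)
  .add("name", StringType)                   // nullable by default, no metadata
  .add("score", "double")                    // dataType given as a String
  .add("tags", ArrayType(StringType), true, "free-form labels")  // with a comment

// schema.fieldNames => Array("id", "name", "score", "tags")
```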
add(long, long) - Static method in class org.apache.spark.streaming.util.RawTextHelper

add(IN) - Method in class org.apache.spark.util.AccumulatorV2
Takes the inputs and accumulates.
add(T) - Method in class org.apache.spark.util.CollectionAccumulator

add(Double) - Method in class org.apache.spark.util.DoubleAccumulator
Adds v to the accumulator, i.e. increments the sum by v and the count by 1.
add(double) - Method in class org.apache.spark.util.DoubleAccumulator
Adds v to the accumulator, i.e. increments the sum by v and the count by 1.
add(Long) - Method in class org.apache.spark.util.LongAccumulator
Adds v to the accumulator, i.e. increments the sum by v and the count by 1.
add(long) - Method in class org.apache.spark.util.LongAccumulator
Adds v to the accumulator, i.e. increments the sum by v and the count by 1.
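A sketch of the accumulator add/sum/count lifecycle (assumes a running SparkContext named sc; not a self-contained program):

```scala
import org.apache.spark.util.LongAccumulator

// Register a named accumulator on the driver, then update it from tasks.
val acc: LongAccumulator = sc.longAccumulator("records")
sc.parallelize(1 to 100).foreach(_ => acc.add(1L))

// Read results only on the driver; task-side reads are not reliable.
println(acc.sum)    // total of all values passed to add
println(acc.count)  // number of add calls
```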
add(Object) - Method in class org.apache.spark.util.sketch.CountMinSketch
Increments item's count by one.
add(Object, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
Increments item's count by count.
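The add, addBinary, addLong, and addString variants below all increment an item's estimated count in the sketch. A minimal sketch of the API (the depth/width/seed values are arbitrary illustration):

```scala
import org.apache.spark.util.sketch.CountMinSketch

// depth = 10, width = 1000, seed = 42 (illustrative parameters)
val cms = CountMinSketch.create(10, 1000, 42)
cms.add("spark")        // increment count by one
cms.add("spark", 5L)    // increment count by five

// A count-min sketch can only over-estimate: est >= true count (here, 6).
val est = cms.estimateCount("spark")
```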
add_months(Column, int) - Static method in class org.apache.spark.sql.functions
Returns the date that is numMonths after startDate.
add_months(Column, Column) - Static method in class org.apache.spark.sql.functions
Returns the date that is numMonths after startDate.
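The two add_months variants differ only in whether the month offset is a literal or another column; a sketch (assumes a DataFrame df with a date column start_date and an integer column n — both hypothetical):

```scala
import org.apache.spark.sql.functions.{add_months, col}

df.select(
  add_months(col("start_date"), 3),         // fixed offset: 3 months
  add_months(col("start_date"), col("n"))   // per-row offset from column n
)
```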
addAppArgs(String...) - Method in class org.apache.spark.launcher.AbstractLauncher
Adds command line arguments for the application.
addAppArgs(String...) - Method in class org.apache.spark.launcher.SparkLauncher

addBinary(byte[]) - Method in class org.apache.spark.util.sketch.CountMinSketch
Increments item's count by one.
addBinary(byte[], long) - Method in class org.apache.spark.util.sketch.CountMinSketch
Increments item's count by count.
addColumn(String[], DataType) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
Create a TableChange for adding an optional column.
addColumn(String[], DataType, boolean) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
Create a TableChange for adding a column.
addColumn(String[], DataType, boolean, String) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
Create a TableChange for adding a column.
addDirectory(String, File) - Method in interface org.apache.spark.rpc.RpcEnvFileServer
Adds a local directory to be served via this file server.
addFile(String) - Method in class org.apache.spark.api.java.JavaSparkContext
Add a file to be downloaded with this Spark job on every node.
addFile(String, boolean) - Method in class org.apache.spark.api.java.JavaSparkContext
Add a file to be downloaded with this Spark job on every node.
addFile(String) - Method in class org.apache.spark.launcher.AbstractLauncher
Adds a file to be submitted with the application.
addFile(String) - Method in class org.apache.spark.launcher.SparkLauncher

addFile(File) - Method in interface org.apache.spark.rpc.RpcEnvFileServer
Adds a file to be served by this RpcEnv.
addFile(String) - Method in class org.apache.spark.SparkContext
Add a file to be downloaded with this Spark job on every node.
addFile(String, boolean) - Method in class org.apache.spark.SparkContext
Add a file to be downloaded with this Spark job on every node.
addFilter(ServletContextHandler, String, Map<String, String>) - Static method in class org.apache.spark.ui.JettyUtils

addGrid(Param<T>, Iterable<T>) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Adds a param with multiple values (overwrites if the input param exists).
addGrid(DoubleParam, double[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Adds a double param with multiple values.
addGrid(IntParam, int[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Adds an int param with multiple values.
addGrid(FloatParam, float[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Adds a float param with multiple values.
addGrid(LongParam, long[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Adds a long param with multiple values.
addGrid(BooleanParam) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Adds a boolean param with true and false.
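Chained addGrid calls expand into the cross product of all parameter values when build() is called; a minimal sketch:

```scala
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.tuning.ParamGridBuilder

val lr = new LogisticRegression()

// 2 regParam values x 2 fitIntercept values = 4 ParamMaps.
val grid = new ParamGridBuilder()
  .addGrid(lr.regParam, Array(0.01, 0.1))
  .addGrid(lr.fitIntercept)   // boolean param: both true and false
  .build()                    // Array[ParamMap], one per combination
```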
addJar(String) - Method in class org.apache.spark.api.java.JavaSparkContext
Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
addJar(String) - Method in class org.apache.spark.launcher.AbstractLauncher
Adds a jar file to be submitted with the application.
addJar(String) - Method in class org.apache.spark.launcher.SparkLauncher

addJar(File) - Method in interface org.apache.spark.rpc.RpcEnvFileServer
Adds a jar to be served by this RpcEnv.
addJar(String) - Method in class org.apache.spark.SparkContext
Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
addJar(String) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Add a jar into the class loader.
addJar(String) - Method in class org.apache.spark.sql.hive.HiveSessionResourceLoader

addListener(SparkAppHandle.Listener) - Method in interface org.apache.spark.launcher.SparkAppHandle
Adds a listener to be notified of changes to the handle's information.
addListener(StreamingQueryListener) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
Register a StreamingQueryListener to receive up-calls for life cycle events of StreamingQuery.
addListener(L) - Method in interface org.apache.spark.util.ListenerBus
Add a listener to listen to events.
addLocalConfiguration(String, int, int, int, JobConf) - Static method in class org.apache.spark.rdd.HadoopRDD
Add Hadoop configuration specific to a single partition and attempt.
addLong(long) - Method in class org.apache.spark.util.sketch.CountMinSketch
Increments item's count by one.
addLong(long, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
Increments item's count by count.
addMapOutput(int, MapStatus) - Method in class org.apache.spark.ShuffleStatus
Register a map output.
addMetrics(TaskMetrics, TaskMetrics) - Static method in class org.apache.spark.status.LiveEntityHelpers
Add m2 values to m1.
addPartition(LiveRDDPartition) - Method in class org.apache.spark.status.RDDPartitionSeq

addPartToPGroup(Partition, PartitionGroup) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer

addPyFile(String) - Method in class org.apache.spark.launcher.AbstractLauncher
Adds a python file / zip / egg to be submitted with the application.
addPyFile(String) - Method in class org.apache.spark.launcher.SparkLauncher

address() - Method in class org.apache.spark.BarrierTaskInfo

address() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution

addresses() - Method in class org.apache.spark.resource.ResourceInformation

addresses() - Method in class org.apache.spark.resource.ResourceInformationJson

addSchedulable(Schedulable) - Method in interface org.apache.spark.scheduler.Schedulable

addShutdownHook(Function0<BoxedUnit>) - Static method in class org.apache.spark.util.ShutdownHookManager
Adds a shutdown hook with default priority.
addShutdownHook(int, Function0<BoxedUnit>) - Static method in class org.apache.spark.util.ShutdownHookManager
Adds a shutdown hook with the given priority.
addSparkArg(String) - Method in class org.apache.spark.launcher.AbstractLauncher
Adds a no-value argument to the Spark invocation.
addSparkArg(String, String) - Method in class org.apache.spark.launcher.AbstractLauncher
Adds an argument with a value to the Spark invocation.
addSparkArg(String) - Method in class org.apache.spark.launcher.SparkLauncher

addSparkArg(String, String) - Method in class org.apache.spark.launcher.SparkLauncher

addSparkListener(SparkListenerInterface) - Method in class org.apache.spark.SparkContext
:: DeveloperApi :: Register a listener to receive up-calls from events that happen during execution.
addSparkVersionMetadata(RecordWriter<NullWritable, Writable>) - Static method in class org.apache.spark.sql.hive.orc.OrcFileFormat
Add metadata specifying the Spark version.
addStreamingListener(StreamingListener) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Add a StreamingListener object for receiving system events related to streaming.
addStreamingListener(StreamingListener) - Method in class org.apache.spark.streaming.StreamingContext
Add a StreamingListener object for receiving system events related to streaming.
addString(String) - Method in class org.apache.spark.util.sketch.CountMinSketch
Increments item's count by one.
addString(String, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
Increments item's count by count.
addTaskCompletionListener(TaskCompletionListener) - Method in class org.apache.spark.BarrierTaskContext

addTaskCompletionListener(TaskCompletionListener) - Method in class org.apache.spark.TaskContext
Adds a (Java friendly) listener to be executed on task completion.
addTaskCompletionListener(Function1<TaskContext, U>) - Method in class org.apache.spark.TaskContext
Adds a listener in the form of a Scala closure to be executed on task completion.
addTaskFailureListener(TaskFailureListener) - Method in class org.apache.spark.BarrierTaskContext

addTaskFailureListener(TaskFailureListener) - Method in class org.apache.spark.TaskContext
Adds a listener to be executed on task failure.
addTaskFailureListener(Function2<TaskContext, Throwable, BoxedUnit>) - Method in class org.apache.spark.TaskContext
Adds a listener to be executed on task failure.
addTaskSetManager(Schedulable, Properties) - Method in interface org.apache.spark.scheduler.SchedulableBuilder

addTime() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

addTime() - Method in class org.apache.spark.status.LiveExecutor

addURL(URL) - Method in class org.apache.spark.util.MutableURLClassLoader

AddWebUIFilter(String, Map<String, String>, String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter

AddWebUIFilter$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter$

ADMIN_ACLS() - Static method in class org.apache.spark.internal.config.UI

ADMIN_ACLS_GROUPS() - Static method in class org.apache.spark.internal.config.UI

AFTAggregator - Class in org.apache.spark.ml.regression
AFTAggregator computes the gradient and loss for an AFT loss function, as used in AFT survival regression for samples in sparse or dense vectors, in an online fashion.
AFTAggregator(Broadcast<DenseVector<Object>>, boolean, Broadcast<double[]>) - Constructor for class org.apache.spark.ml.regression.AFTAggregator

AFTCostFun - Class in org.apache.spark.ml.regression
AFTCostFun implements Breeze's DiffFunction[T] for the AFT cost.
AFTCostFun(RDD<AFTPoint>, boolean, Broadcast<double[]>, int) - Constructor for class org.apache.spark.ml.regression.AFTCostFun

AFTSurvivalRegression - Class in org.apache.spark.ml.regression
Fit a parametric survival regression model named accelerated failure time (AFT) model (see Accelerated failure time model (Wikipedia)) based on the Weibull distribution of the survival time.
AFTSurvivalRegression(String) - Constructor for class org.apache.spark.ml.regression.AFTSurvivalRegression

AFTSurvivalRegression() - Constructor for class org.apache.spark.ml.regression.AFTSurvivalRegression

AFTSurvivalRegressionModel - Class in org.apache.spark.ml.regression
Model produced by AFTSurvivalRegression.
AFTSurvivalRegressionParams - Interface in org.apache.spark.ml.regression
Params for accelerated failure time (AFT) regression.
agg(Column, Column...) - Method in class org.apache.spark.sql.Dataset
Aggregates on the entire Dataset without groups.
agg(Tuple2<String, String>, Seq<Tuple2<String, String>>) - Method in class org.apache.spark.sql.Dataset
(Scala-specific) Aggregates on the entire Dataset without groups.
agg(Map<String, String>) - Method in class org.apache.spark.sql.Dataset
(Scala-specific) Aggregates on the entire Dataset without groups.
agg(Map<String, String>) - Method in class org.apache.spark.sql.Dataset
(Java-specific) Aggregates on the entire Dataset without groups.
agg(Column, Seq<Column>) - Method in class org.apache.spark.sql.Dataset
Aggregates on the entire Dataset without groups.
agg(TypedColumn<V, U1>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Computes the given aggregation, returning a Dataset of tuples for each unique key and the result of computing this aggregation over all elements in the group.
agg(TypedColumn<V, U1>, TypedColumn<V, U2>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>, TypedColumn<V, U6>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>, TypedColumn<V, U6>, TypedColumn<V, U7>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>, TypedColumn<V, U6>, TypedColumn<V, U7>, TypedColumn<V, U8>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
agg(Column, Column...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
Compute aggregates by specifying a series of aggregate columns.
agg(Tuple2<String, String>, Seq<Tuple2<String, String>>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
(Scala-specific) Compute aggregates by specifying the column names and aggregate methods.
agg(Map<String, String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
(Scala-specific) Compute aggregates by specifying a map from column name to aggregate methods.
agg(Map<String, String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
(Java-specific) Compute aggregates by specifying a map from column name to aggregate methods.
agg(Column, Seq<Column>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
Compute aggregates by specifying a series of aggregate columns.
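The Dataset and RelationalGroupedDataset agg variants accept either aggregate Columns or a column-name-to-function map; a sketch (assumes a DataFrame df with columns dept, salary, and age — all hypothetical):

```scala
import org.apache.spark.sql.functions.{avg, max}

// Whole-Dataset aggregation, no grouping.
df.agg(max("salary"), avg("age"))

// Grouped aggregation via a column name -> aggregate method map.
df.groupBy("dept").agg(Map("salary" -> "max", "age" -> "avg"))
```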
aggregate(U, Function2<U, T, U>, Function2<U, U, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Aggregate the elements of each partition, and then the results for all the partitions, using the given combine functions and a neutral "zero value".
aggregate(U, Function2<U, T, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
Aggregate the elements of each partition, and then the results for all the partitions, using the given combine functions and a neutral "zero value".
aggregate(Column, Column, Function2<Column, Column, Column>, Function1<Column, Column>) - Static method in class org.apache.spark.sql.functions
Applies a binary operator to an initial state and all elements in the array, and reduces this to a single state.
aggregate(Column, Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
Applies a binary operator to an initial state and all elements in the array, and reduces this to a single state.
aggregateByKey(U, Partitioner, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
Aggregate the values of each key, using the given combine functions and a neutral "zero value".
aggregateByKey(U, int, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
Aggregate the values of each key, using the given combine functions and a neutral "zero value".
aggregateByKey(U, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
Aggregate the values of each key, using the given combine functions and a neutral "zero value".
aggregateByKey(U, Partitioner, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Aggregate the values of each key, using the given combine functions and a neutral "zero value".
aggregateByKey(U, int, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Aggregate the values of each key, using the given combine functions and a neutral "zero value".
aggregateByKey(U, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Aggregate the values of each key, using the given combine functions and a neutral "zero value".
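Every aggregateByKey overload takes a zero value, a within-partition function (seqOp), and a cross-partition merge function (combOp); a sketch (assumes a running SparkContext named sc):

```scala
val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

// Zero value 0; seqOp sums values within a partition;
// combOp merges the per-partition sums for each key.
val sums = pairs.aggregateByKey(0)(_ + _, _ + _)
// sums.collect() yields ("a", 4) and ("b", 2), in some order
```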
AggregatedDialect - Class in org.apache.spark.sql.jdbc
AggregatedDialect can unify multiple dialects into one virtual dialect.
AggregatedDialect(List<JdbcDialect>) - Constructor for class org.apache.spark.sql.jdbc.AggregatedDialect

aggregateMessages(Function1<EdgeContext<VD, ED, A>, BoxedUnit>, Function2<A, A, A>, TripletFields, ClassTag<A>) - Method in class org.apache.spark.graphx.Graph
Aggregates values from the neighboring edges and vertices of each vertex.
aggregateMessagesWithActiveSet(Function1<EdgeContext<VD, ED, A>, BoxedUnit>, Function2<A, A, A>, TripletFields, Option<Tuple2<VertexRDD<?>, EdgeDirection>>, ClassTag<A>) - Method in class org.apache.spark.graphx.impl.GraphImpl

aggregateUsingIndex(RDD<Tuple2<Object, VD2>>, Function2<VD2, VD2, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl

aggregateUsingIndex(RDD<Tuple2<Object, VD2>>, Function2<VD2, VD2, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.VertexRDD
Aggregates vertices in messages that have the same ids using reduceFunc, returning a VertexRDD co-indexed with this.
AggregatingEdgeContext<VD,ED,A> - Class in org.apache.spark.graphx.impl

AggregatingEdgeContext(Function2<A, A, A>, Object, BitSet) - Constructor for class org.apache.spark.graphx.impl.AggregatingEdgeContext

aggregationDepth() - Method in class org.apache.spark.ml.classification.LinearSVC

aggregationDepth() - Method in class org.apache.spark.ml.classification.LinearSVCModel

aggregationDepth() - Method in class org.apache.spark.ml.classification.LogisticRegression

aggregationDepth() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel

aggregationDepth() - Method in interface org.apache.spark.ml.param.shared.HasAggregationDepth
Param for suggested depth for treeAggregate (>= 2).
aggregationDepth() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression

aggregationDepth() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel

aggregationDepth() - Method in class org.apache.spark.ml.regression.LinearRegression

aggregationDepth() - Method in class org.apache.spark.ml.regression.LinearRegressionModel

Aggregator<K,V,C> - Class in org.apache.spark
:: DeveloperApi :: A set of functions used to aggregate data.
Aggregator(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>) - Constructor for class org.apache.spark.Aggregator

aggregator() - Method in class org.apache.spark.ShuffleDependency

Aggregator<IN,BUF,OUT> - Class in org.apache.spark.sql.expressions
A base class for user-defined aggregations, which can be used in Dataset operations to take all of the elements of a group and reduce them to a single value.
Aggregator() - Constructor for class org.apache.spark.sql.expressions.Aggregator
 
aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$

aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$

aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$

aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$

aic() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary

Algo - Class in org.apache.spark.mllib.tree.configuration
Enum to select the algorithm for the decision tree.
Algo() - Constructor for class org.apache.spark.mllib.tree.configuration.Algo

algo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

algo() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel

algo() - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel

algo() - Method in class org.apache.spark.mllib.tree.model.RandomForestModel

algorithm() - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD

alias(String) - Method in class org.apache.spark.sql.Column
Gives the column an alias.
alias(String) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset with an alias set.
alias(Symbol) - Method in class org.apache.spark.sql.Dataset
(Scala-specific) Returns a new Dataset with an alias set.
All - Static variable in class org.apache.spark.graphx.TripletFields
Exposes all the fields (source, edge, and destination).
AllJobsCancelled - Class in org.apache.spark.scheduler

AllJobsCancelled() - Constructor for class org.apache.spark.scheduler.AllJobsCancelled

allocator() - Method in class org.apache.spark.storage.memory.SerializedValuesHolder

AllReceiverIds - Class in org.apache.spark.streaming.scheduler
A message used by ReceiverTracker to ask for the ids of all receivers still stored in ReceiverTrackerEndpoint.
AllReceiverIds() - Constructor for class org.apache.spark.streaming.scheduler.AllReceiverIds

allSources() - Static method in class org.apache.spark.metrics.source.StaticSources
The set of all static sources.
alpha() - Method in class org.apache.spark.ml.recommendation.ALS

alpha() - Method in interface org.apache.spark.ml.recommendation.ALSParams
Param for the alpha parameter in the implicit preference formulation (nonnegative).
alpha() - Method in class org.apache.spark.mllib.random.WeibullGenerator

ALS - Class in org.apache.spark.ml.recommendation
Alternating Least Squares (ALS) matrix factorization.
ALS(String) - Constructor for class org.apache.spark.ml.recommendation.ALS

ALS() - Constructor for class org.apache.spark.ml.recommendation.ALS

ALS - Class in org.apache.spark.mllib.recommendation
Alternating Least Squares matrix factorization.
ALS() - Constructor for class org.apache.spark.mllib.recommendation.ALS
Constructs an ALS instance with default parameters: {numBlocks: -1, rank: 10, iterations: 10, lambda: 0.01, implicitPrefs: false, alpha: 1.0}.
ALS.InBlock$ - Class in org.apache.spark.ml.recommendation

ALS.LeastSquaresNESolver - Interface in org.apache.spark.ml.recommendation
Trait for least squares solvers applied to the normal equation.
ALS.Rating<ID> - Class in org.apache.spark.ml.recommendation
:: DeveloperApi :: Rating class for better code readability.
ALS.Rating$ - Class in org.apache.spark.ml.recommendation

ALS.RatingBlock$ - Class in org.apache.spark.ml.recommendation

ALSModel - Class in org.apache.spark.ml.recommendation
Model fitted by ALS.
ALSModelParams - Interface in org.apache.spark.ml.recommendation
Common params for ALS and ALSModel.
ALSParams - Interface in org.apache.spark.ml.recommendation
Common params for ALS.
alterDatabase(CatalogDatabase) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Alter a database whose name matches the one specified in database, assuming it exists.
alterFunction(String, CatalogFunction) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Alter a function whose name matches the one specified in `func`, assuming it exists.
alterNamespace(String[], NamespaceChange...) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension

alterNamespace(String[], NamespaceChange...) - Method in interface org.apache.spark.sql.connector.catalog.SupportsNamespaces
Apply a set of metadata changes to a namespace in the catalog.
alterPartitions(String, String, Seq<CatalogTablePartition>) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Alter one or more table partitions whose specs match the ones specified in newParts, assuming the partitions exist.
alterTable(Identifier, TableChange...) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension

alterTable(Identifier, TableChange...) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
Apply a set of changes to a table in the catalog.
alterTable(CatalogTable) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Alter a table whose name matches the one specified in `table`, assuming it exists.
alterTable(String, String, CatalogTable) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Updates the given table with new metadata, optionally renaming the table or moving it across databases.
alterTableDataSchema(String, String, StructType, Map<String, String>) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Updates the given table with a new data schema and table properties, keeping everything else unchanged.
AlwaysFalse - Class in org.apache.spark.sql.sources
A filter that always evaluates to false.
AlwaysFalse() - Constructor for class org.apache.spark.sql.sources.AlwaysFalse

AlwaysTrue - Class in org.apache.spark.sql.sources
A filter that always evaluates to true.
AlwaysTrue() - 类 的构造器org.apache.spark.sql.sources.AlwaysTrue
 
am() - 类 中的方法org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager
 
AMOUNT() - 类 中的静态方法org.apache.spark.resource.ResourceUtils
 
AnalysisException - org.apache.spark.sql中的异常错误
Thrown when a query fails to analyze, usually because the query itself is invalid.
and(Column) - 类 中的方法org.apache.spark.sql.Column
Boolean AND.
And - org.apache.spark.sql.sources中的类
A filter that evaluates to true iff both left or right evaluate to true.
And(Filter, Filter) - 类 的构造器org.apache.spark.sql.sources.And
 
antecedent() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule

ANY() - Static method in class org.apache.spark.scheduler.TaskLocality

AnyDataType - Class in org.apache.spark.sql.types
An AbstractDataType that matches any concrete data types.
AnyDataType() - Constructor for class org.apache.spark.sql.types.AnyDataType

anyNull() - Method in interface org.apache.spark.sql.Row
Returns true if there are any NULL values in this row.
anyNull() - Method in class org.apache.spark.sql.vectorized.ColumnarRow

ApiHelper - Class in org.apache.spark.ui.jobs

ApiHelper() - Constructor for class org.apache.spark.ui.jobs.ApiHelper

ApiRequestContext - Interface in org.apache.spark.status.api.v1

APP_DATA_RETENTION() - Static method in class org.apache.spark.internal.config.Worker

APP_STATUS_METRICS_ENABLED() - Static method in class org.apache.spark.internal.config.Status

appAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart

append() - Method in class org.apache.spark.sql.DataFrameWriterV2
Append the contents of the data frame to the output table.
Append() - Static method in class org.apache.spark.sql.streaming.OutputMode
OutputMode in which only the new rows in the streaming DataFrame/Dataset will be written to the sink.
appendBias(Vector) - Static method in class org.apache.spark.mllib.util.MLUtils
Returns a new vector with 1.0 (bias) appended to the input vector.
appendColumn(StructType, String, DataType, boolean) - Static method in class org.apache.spark.ml.util.SchemaUtils
Appends a new column to the input schema.
appendColumn(StructType, StructField) - Static method in class org.apache.spark.ml.util.SchemaUtils
Appends a new column to the input schema.
appendReadColumns(Configuration, Seq<Integer>, Seq<String>) - Static method in class org.apache.spark.sql.hive.HiveShim

AppHistoryServerPlugin - Interface in org.apache.spark.status
An interface for creating history listeners (to replay event logs) defined in other modules like SQL, and for setting up the UI of the plugin to rebuild the history UI.
appId() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart

appId() - Method in interface org.apache.spark.status.api.v1.BaseAppResource

APPLICATION_EXECUTOR_LIMIT() - Static method in class org.apache.spark.ui.ToolTips

APPLICATION_MASTER() - Static method in class org.apache.spark.metrics.MetricsSystemInstances

applicationAttemptId() - Method in interface org.apache.spark.scheduler.SchedulerBackend
Get the attempt ID for this run, if the cluster manager supports multiple attempts.
applicationAttemptId() - Method in interface org.apache.spark.scheduler.TaskScheduler
Get an application's attempt ID associated with the job.
applicationAttemptId() - Method in class org.apache.spark.SparkContext

ApplicationAttemptInfo - Class in org.apache.spark.status.api.v1

applicationEndFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

applicationEndToJson(SparkListenerApplicationEnd) - Static method in class org.apache.spark.util.JsonProtocol

ApplicationEnvironmentInfo - Class in org.apache.spark.status.api.v1

applicationId() - Method in interface org.apache.spark.scheduler.SchedulerBackend
Get an application ID associated with the job.
applicationId() - Method in interface org.apache.spark.scheduler.TaskScheduler
Get an application ID associated with the job.
applicationId() - Method in class org.apache.spark.SparkContext
A unique identifier for the Spark application.
ApplicationInfo - Class in org.apache.spark.status.api.v1

APPLICATIONS() - Static method in class org.apache.spark.metrics.MetricsSystemInstances

applicationStartFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

applicationStartToJson(SparkListenerApplicationStart) - Static method in class org.apache.spark.util.JsonProtocol

ApplicationStatus - Enum in org.apache.spark.status.api.v1

apply(T1) - Static method in class org.apache.spark.CleanAccum

apply(T1) - Static method in class org.apache.spark.CleanBroadcast

apply(T1) - Static method in class org.apache.spark.CleanCheckpoint

apply(T1) - Static method in class org.apache.spark.CleanRDD

apply(T1) - Static method in class org.apache.spark.CleanShuffle

apply(T1, T2) - Static method in class org.apache.spark.ContextBarrierId

apply(T1, T2, T3, T4, T5, T6, T7, T8) - Static method in class org.apache.spark.ExceptionFailure

apply(T1, T2, T3) - Static method in class org.apache.spark.ExecutorLostFailure

apply(T1) - Static method in class org.apache.spark.ExecutorRegistered

apply(T1) - Static method in class org.apache.spark.ExecutorRemoved

apply(T1, T2, T3, T4, T5, T6) - Static method in class org.apache.spark.FetchFailed

apply(RDD<Tuple2<Object, VD>>, RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.Graph
Construct a graph from a collection of vertices and edges with attributes.
apply(RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
Create a graph from edges, setting referenced vertices to defaultVertexAttr.
apply(RDD<Tuple2<Object, VD>>, RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
Create a graph from vertices and edges, setting missing vertices to defaultVertexAttr.
apply(VertexRDD<VD>, EdgeRDD<ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
Create a graph from a VertexRDD and an EdgeRDD with arbitrary replicated vertices.
apply(Graph<VD, ED>, A, int, EdgeDirection, Function3<Object, VD, A, VD>, Function1<EdgeTriplet<VD, ED>, Iterator<Tuple2<Object, A>>>, Function2<A, A, A>, ClassTag<VD>, ClassTag<ED>, ClassTag<A>) - Static method in class org.apache.spark.graphx.Pregel
Execute a Pregel-like iterative vertex-parallel abstraction.
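The Pregel entry above describes an iterative vertex-parallel loop: each superstep merges inbound messages per vertex, applies the vertex program, and generates new messages until none remain. A minimal Python sketch of that control flow, with plain dicts and lists standing in for GraphX's Graph and RDDs (all names here are hypothetical, not Spark API), and without GraphX's activeDirection optimization:

```python
def pregel(vertices, edges, initial_msg, vprog, send_msg, merge_msg, max_iter=20):
    """Pregel-style loop. vertices: {id: attr}; edges: [(src, dst, edge_attr)].
    vprog(id, attr, msg) -> new attr; send_msg(src, sattr, dst, dattr, eattr)
    -> iterable of (target_id, msg); merge_msg combines two messages."""
    # Superstep 0: every vertex processes the initial message.
    vertices = {v: vprog(v, attr, initial_msg) for v, attr in vertices.items()}
    for _ in range(max_iter):
        # Gather messages along edges, merging messages aimed at the same vertex.
        inbox = {}
        for src, dst, eattr in edges:
            for target, msg in send_msg(src, vertices[src], dst, vertices[dst], eattr):
                inbox[target] = msg if target not in inbox else merge_msg(inbox[target], msg)
        if not inbox:
            break  # no messages sent: the computation has converged
        # Vertices with mail run the vertex program; others keep their attribute.
        vertices = {v: vprog(v, attr, inbox[v]) if v in inbox else attr
                    for v, attr in vertices.items()}
    return vertices
```

For example, propagating the maximum vertex value around a cycle converges once every vertex holds the global maximum.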
apply(RDD<Tuple2<Object, VD>>, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
Constructs a standalone VertexRDD (one that is not set up for efficient joins with an EdgeRDD) from an RDD of vertex-attribute pairs.
apply(RDD<Tuple2<Object, VD>>, EdgeRDD<?>, VD, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
Constructs a VertexRDD from an RDD of vertex-attribute pairs.
apply(RDD<Tuple2<Object, VD>>, EdgeRDD<?>, VD, Function2<VD, VD, VD>, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
Constructs a VertexRDD from an RDD of vertex-attribute pairs.
apply(DenseMatrix<Object>, DenseMatrix<Object>, Function1<Object, Object>) - Static method in class org.apache.spark.ml.ann.ApplyInPlace

apply(DenseMatrix<Object>, DenseMatrix<Object>, DenseMatrix<Object>, Function2<Object, Object, Object>) - Static method in class org.apache.spark.ml.ann.ApplyInPlace

apply(String) - Method in class org.apache.spark.ml.attribute.AttributeGroup
Gets an attribute by its name.
apply(int) - Method in class org.apache.spark.ml.attribute.AttributeGroup
Gets an attribute by its index.
apply(T1, T2) - Static method in class org.apache.spark.ml.clustering.ClusterData

apply(T1, T2) - Static method in class org.apache.spark.ml.feature.LabeledPoint

apply(int, int) - Method in class org.apache.spark.ml.linalg.DenseMatrix

apply(int) - Method in class org.apache.spark.ml.linalg.DenseVector

apply(int, int) - Method in interface org.apache.spark.ml.linalg.Matrix
Gets the (i, j)-th element.
apply(int, int) - Method in class org.apache.spark.ml.linalg.SparseMatrix

apply(int) - Method in class org.apache.spark.ml.linalg.SparseVector

apply(int) - Method in interface org.apache.spark.ml.linalg.Vector
Gets the value of the ith element.
apply(Param<T>) - Method in class org.apache.spark.ml.param.ParamMap
Gets the value of the input param or its default value if it does not exist.
apply(GeneralizedLinearRegressionBase) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink$
Constructs the FamilyAndLink object from a parameter map.
apply(T1) - Static method in class org.apache.spark.ml.SaveInstanceEnd

apply(T1) - Static method in class org.apache.spark.ml.SaveInstanceStart

apply() - Static method in class org.apache.spark.ml.TransformEnd

apply() - Static method in class org.apache.spark.ml.TransformStart

apply(Split) - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData$

apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data

apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data

apply(T1, T2, T3, T4) - Static method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data

apply(BinaryConfusionMatrix) - Method in interface org.apache.spark.mllib.evaluation.binary.BinaryClassificationMetricComputer

apply(BinaryConfusionMatrix) - Static method in class org.apache.spark.mllib.evaluation.binary.FalsePositiveRate

apply(BinaryConfusionMatrix) - Static method in class org.apache.spark.mllib.evaluation.binary.Precision

apply(BinaryConfusionMatrix) - Static method in class org.apache.spark.mllib.evaluation.binary.Recall

apply(T1) - Static method in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$.Data

apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.mllib.feature.VocabWord

apply(int, int) - Method in class org.apache.spark.mllib.linalg.DenseMatrix

apply(int) - Method in class org.apache.spark.mllib.linalg.DenseVector

apply(T1, T2) - Static method in class org.apache.spark.mllib.linalg.distributed.IndexedRow

apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.linalg.distributed.MatrixEntry

apply(int, int) - Method in interface org.apache.spark.mllib.linalg.Matrix
Gets the (i, j)-th element.
apply(int, int) - Method in class org.apache.spark.mllib.linalg.SparseMatrix

apply(int) - Method in class org.apache.spark.mllib.linalg.SparseVector

apply(int) - Method in interface org.apache.spark.mllib.linalg.Vector
Gets the value of the ith element.
apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.recommendation.Rating

apply(T1, T2) - Static method in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$.Data

apply(T1, T2) - Static method in class org.apache.spark.mllib.stat.test.BinarySample

apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.Algo

apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy

apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType

apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy

apply(int, Node) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData$

apply(Row) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData$

apply(int, Node) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData

apply(Row) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData

apply(Predict) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData$

apply(Row) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData$

apply(Predict) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData

apply(Row) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData

apply(Split) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData$

apply(Row) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData$

apply(Split) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData

apply(Row) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData

apply(int, Predict, double, boolean) - Static method in class org.apache.spark.mllib.tree.model.Node
Construct a node with nodeIndex, predict, impurity and isLeaf parameters.
apply(T1, T2, T3, T4) - Static method in class org.apache.spark.mllib.tree.model.Split

apply(int) - Static method in class org.apache.spark.rdd.CheckpointState

apply(int) - Static method in class org.apache.spark.rdd.DeterministicLevel

apply(T1, T2) - Static method in class org.apache.spark.resource.ResourceInformationJson

apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.scheduler.AccumulableInfo

apply(T1, T2, T3, T4) - Static method in class org.apache.spark.scheduler.AskPermissionToCommitOutput

apply(T1, T2) - Static method in class org.apache.spark.scheduler.BlacklistedExecutor

apply(String, long, Enumeration.Value, ByteBuffer, Map<String, ResourceInformation>) - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate$
Alternate factory method that takes a ByteBuffer directly for the data field.
apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.local.KillTask

apply() - Static method in class org.apache.spark.scheduler.local.ReviveOffers

apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.local.StatusUpdate

apply() - Static method in class org.apache.spark.scheduler.local.StopExecutor

apply(long, TaskMetrics) - Static method in class org.apache.spark.scheduler.RuntimePercentage

apply(int) - Static method in class org.apache.spark.scheduler.SchedulingMode

apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerApplicationEnd

apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.scheduler.SparkListenerApplicationStart

apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded

apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved

apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerBlockUpdated

apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerEnvironmentUpdate

apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorAdded

apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted

apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklistedForStage

apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate

apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved

apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted

apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerJobEnd

apply(T1, T2, T3, T4) - Static method in class org.apache.spark.scheduler.SparkListenerJobStart

apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerLogStart

apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerNodeBlacklisted

apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerNodeBlacklistedForStage

apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerNodeUnblacklisted

apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted

apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerStageCompleted

apply(T1, T2, T3, T4) - Static method in class org.apache.spark.scheduler.SparkListenerStageExecutorMetrics

apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerStageSubmitted

apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.scheduler.SparkListenerTaskEnd

apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerTaskGettingResult

apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerTaskStart

apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerUnpersistRDD

apply(int) - Static method in class org.apache.spark.scheduler.TaskLocality

apply(Object) - Method in class org.apache.spark.sql.Column
Extracts a value or values from a complex type.
apply(String, Expression...) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
Create a logical transform for applying a named transform.
apply(String, Seq<Expression>) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions

apply(String) - Method in class org.apache.spark.sql.Dataset
Selects column based on the column name and returns it as a Column.
apply(LogicalPlan) - Static method in class org.apache.spark.sql.dynamicpruning.CleanupDynamicPruningFilters

apply(LogicalPlan) - Static method in class org.apache.spark.sql.dynamicpruning.PartitionPruning

apply(SparkPlan) - Method in class org.apache.spark.sql.dynamicpruning.PlanDynamicPruningFilters

apply(Column...) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
Creates a Column for this UDAF using given Columns as input arguments.
apply(Seq<Column>) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
Creates a Column for this UDAF using given Columns as input arguments.
apply(Column...) - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
Returns an expression that invokes the UDF, using the given arguments.
apply(Seq<Column>) - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
Returns an expression that invokes the UDF, using the given arguments.
apply(LogicalPlan) - Method in class org.apache.spark.sql.hive.DetermineTableStats

apply(T1, T2, T3, T4) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand

apply(ScriptInputOutputSchema) - Static method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema

apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveDirCommand

apply(T1, T2, T3, T4, T5, T6) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable

apply(T1, T2, T3, T4) - Static method in class org.apache.spark.sql.hive.execution.OptimizedCreateHiveTableAsSelectCommand

apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec

apply(LogicalPlan) - Static method in class org.apache.spark.sql.hive.HiveAnalysis

apply(LogicalPlan) - Method in class org.apache.spark.sql.hive.HiveStrategies.HiveTableScans$

apply(LogicalPlan) - Static method in class org.apache.spark.sql.hive.HiveStrategies.HiveTableScans

apply(LogicalPlan) - Method in class org.apache.spark.sql.hive.HiveStrategies.Scripts$

apply(LogicalPlan) - Static method in class org.apache.spark.sql.hive.HiveStrategies.Scripts

apply(T1, T2) - Static method in class org.apache.spark.sql.hive.HiveUDAFBuffer

apply(LogicalPlan) - Method in class org.apache.spark.sql.hive.RelationConversions

apply(LogicalPlan) - Method in class org.apache.spark.sql.hive.ResolveHiveSerdeTable

apply(T1, T2) - Static method in class org.apache.spark.sql.jdbc.JdbcType

apply(Dataset<Row>, Seq<Expression>, RelationalGroupedDataset.GroupType) - Static method in class org.apache.spark.sql.RelationalGroupedDataset

apply(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i.
apply(T1, T2) - Static method in class org.apache.spark.sql.sources.And

apply(T1, T2) - Static method in class org.apache.spark.sql.sources.EqualNullSafe

apply(T1, T2) - Static method in class org.apache.spark.sql.sources.EqualTo

apply(T1, T2) - Static method in class org.apache.spark.sql.sources.GreaterThan

apply(T1, T2) - Static method in class org.apache.spark.sql.sources.GreaterThanOrEqual

apply(T1, T2) - Static method in class org.apache.spark.sql.sources.In

apply(T1) - Static method in class org.apache.spark.sql.sources.IsNotNull

apply(T1) - Static method in class org.apache.spark.sql.sources.IsNull

apply(T1, T2) - Static method in class org.apache.spark.sql.sources.LessThan

apply(T1, T2) - Static method in class org.apache.spark.sql.sources.LessThanOrEqual

apply(T1) - Static method in class org.apache.spark.sql.sources.Not

apply(T1, T2) - Static method in class org.apache.spark.sql.sources.Or

apply(T1, T2) - Static method in class org.apache.spark.sql.sources.StringContains

apply(T1, T2) - Static method in class org.apache.spark.sql.sources.StringEndsWith

apply(T1, T2) - Static method in class org.apache.spark.sql.sources.StringStartsWith

apply(String, Option<Object>) - Static method in class org.apache.spark.sql.streaming.SinkProgress

apply(DataType) - Static method in class org.apache.spark.sql.types.ArrayType
Construct an ArrayType object with the given element type.
apply(T1) - Static method in class org.apache.spark.sql.types.CharType

apply(double) - Static method in class org.apache.spark.sql.types.Decimal

apply(long) - Static method in class org.apache.spark.sql.types.Decimal

apply(int) - Static method in class org.apache.spark.sql.types.Decimal

apply(BigDecimal) - Static method in class org.apache.spark.sql.types.Decimal

apply(BigDecimal) - Static method in class org.apache.spark.sql.types.Decimal

apply(BigInteger) - Static method in class org.apache.spark.sql.types.Decimal

apply(BigInt) - Static method in class org.apache.spark.sql.types.Decimal

apply(BigDecimal, int, int) - Static method in class org.apache.spark.sql.types.Decimal

apply(BigDecimal, int, int) - Static method in class org.apache.spark.sql.types.Decimal

apply(long, int, int) - Static method in class org.apache.spark.sql.types.Decimal

apply(String) - Static method in class org.apache.spark.sql.types.Decimal

apply(DataType, DataType) - Static method in class org.apache.spark.sql.types.MapType
Construct a MapType object with the given key type and value type.
apply(T1, T2, T3, T4) - Static method in class org.apache.spark.sql.types.StructField

apply(String) - Method in class org.apache.spark.sql.types.StructType
Extracts the StructField with the given name.
apply(Set<String>) - Method in class org.apache.spark.sql.types.StructType
Returns a StructType containing StructFields of the given names, preserving the original order of fields.
apply(int) - Method in class org.apache.spark.sql.types.StructType

apply(T1) - Static method in class org.apache.spark.sql.types.VarcharType

apply(T1, T2, T3, T4, T5, T6, T7, T8) - Static method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo

apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.status.api.v1.ApplicationInfo

apply(T1) - Static method in class org.apache.spark.status.api.v1.StackTrace

apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.status.api.v1.ThreadStackTrace

apply(int) - Method in class org.apache.spark.status.RDDPartitionSeq

apply(String) - Static method in class org.apache.spark.storage.BlockId

apply(String, String, int, Option<String>) - Static method in class org.apache.spark.storage.BlockManagerId
Returns a BlockManagerId for the given configuration.
apply(ObjectInput) - Static method in class org.apache.spark.storage.BlockManagerId

apply(T1, T2) - Static method in class org.apache.spark.storage.BroadcastBlockId

apply(T1, T2) - Static method in class org.apache.spark.storage.RDDBlockId

apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleBlockBatchId

apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleBlockId

apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleDataBlockId

apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleIndexBlockId

apply(boolean, boolean, boolean, boolean, int) - Static method in class org.apache.spark.storage.StorageLevel
:: DeveloperApi :: Create a new StorageLevel object.
apply(boolean, boolean, boolean, int) - Static method in class org.apache.spark.storage.StorageLevel
:: DeveloperApi :: Create a new StorageLevel object without setting useOffHeap.
apply(int, int) - Static method in class org.apache.spark.storage.StorageLevel
:: DeveloperApi :: Create a new StorageLevel object from its integer representation.
apply(ObjectInput) - Static method in class org.apache.spark.storage.StorageLevel
:: DeveloperApi :: Read StorageLevel object from ObjectInput stream.
apply(T1, T2) - Static method in class org.apache.spark.storage.StreamBlockId

apply(T1) - Static method in class org.apache.spark.storage.TaskResultBlockId

apply(T1) - Static method in class org.apache.spark.streaming.Duration

apply(long) - Static method in class org.apache.spark.streaming.Milliseconds

apply(long) - Static method in class org.apache.spark.streaming.Minutes

apply(T1, T2, T3, T4, T5, T6) - Static method in class org.apache.spark.streaming.scheduler.BatchInfo

apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.streaming.scheduler.OutputOperationInfo

apply(T1, T2, T3, T4, T5, T6, T7, T8) - Static method in class org.apache.spark.streaming.scheduler.ReceiverInfo

apply(int) - Static method in class org.apache.spark.streaming.scheduler.ReceiverState

apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted

apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted

apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted

apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationCompleted

apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationStarted

apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverError

apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStarted

apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStopped

apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerStreamingStarted

apply(long) - Static method in class org.apache.spark.streaming.Seconds

apply(T1, T2, T3) - Static method in class org.apache.spark.TaskCommitDenied

apply(T1, T2, T3, T4) - Static method in class org.apache.spark.TaskKilled

apply(int) - Static method in class org.apache.spark.TaskState

apply(TraversableOnce<Object>) - Static method in class org.apache.spark.util.StatCounter
Build a StatCounter from a list of values.
apply(Seq<Object>) - Static method in class org.apache.spark.util.StatCounter
Build a StatCounter from a list of values passed as variable-length arguments.
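StatCounter tracks count, mean, and variance of a stream of values and, crucially for Spark, can merge partial results computed by different tasks. The sketch below is a hypothetical Python re-implementation of that idea (not Spark's code): Welford's online update for single values, and the pairwise combination formula of Chan et al. for merging two partial summaries.

```python
class StatCounter:
    """Running count/mean/variance with support for merging partial results."""

    def __init__(self, values=()):
        self.n = 0        # number of values seen
        self.mu = 0.0     # running mean
        self.m2 = 0.0     # running sum of squared deviations from the mean
        for v in values:
            self.merge_value(v)

    def merge_value(self, v):
        # Welford's online update: numerically stable running mean/variance.
        self.n += 1
        delta = v - self.mu
        self.mu += delta / self.n
        self.m2 += delta * (v - self.mu)
        return self

    def merge(self, other):
        # Chan et al. pairwise combination, as used to merge per-task partials.
        if other.n == 0:
            return self
        if self.n == 0:
            self.n, self.mu, self.m2 = other.n, other.mu, other.m2
            return self
        delta = other.mu - self.mu
        n = self.n + other.n
        self.mu += delta * other.n / n
        self.m2 += other.m2 + delta * delta * self.n * other.n / n
        self.n = n
        return self

    def mean(self):
        return self.mu

    def variance(self):
        # Population variance, matching StatCounter.variance's convention.
        return self.m2 / self.n if self.n else float("nan")
```

Merging two counters built over disjoint halves of a dataset yields the same statistics as one counter over the whole dataset, which is what makes the per-task aggregation correct.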
APPLY_CUSTOM_EXECUTOR_LOG_URL_TO_INCOMPLETE_APP() - Static method in class org.apache.spark.internal.config.History

ApplyInPlace - Class in org.apache.spark.ml.ann
Implements in-place application of functions to arrays.
ApplyInPlace() - Constructor for class org.apache.spark.ml.ann.ApplyInPlace

applyNamespaceChanges(Map<String, String>, Seq<NamespaceChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
Apply properties changes to a map and return the result.
applyNamespaceChanges(Map<String, String>, Seq<NamespaceChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
Apply properties changes to a Java map and return the result.
applyPropertiesChanges(Map<String, String>, Seq<TableChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
Apply properties changes to a map and return the result.
applyPropertiesChanges(Map<String, String>, Seq<TableChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
Apply properties changes to a Java map and return the result.
applySchemaChanges(StructType, Seq<TableChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
Apply schema changes to a schema and return the result.
appName() - Method in class org.apache.spark.api.java.JavaSparkContext

appName() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart

appName() - Method in class org.apache.spark.SparkContext

appName(String) - Method in class org.apache.spark.sql.SparkSession.Builder
Sets a name for the application, which will be shown in the Spark web UI.
approx_count_distinct(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the approximate number of distinct items in a group.
approx_count_distinct(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the approximate number of distinct items in a group.
approx_count_distinct(Column, double) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the approximate number of distinct items in a group.
approx_count_distinct(String, double) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the approximate number of distinct items in a group.
ApproxHist() - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy

ApproximateEvaluator<U,R> - Interface in org.apache.spark.partial
An object that computes a function incrementally by merging in results of type U from multiple tasks.
approxQuantile(String, double[], double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Calculates the approximate quantiles of a numerical column of a DataFrame.
approxQuantile(String[], double[], double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Calculates the approximate quantiles of numerical columns of a DataFrame.
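approxQuantile's contract is rank-based: for probability p over N values and relative error e, the returned value's rank may deviate from floor(p*N) by up to e*N. The hypothetical Python stand-in below illustrates that contract by computing the target rank exactly via sorting (effectively relative error 0); Spark itself uses a single-pass, bounded-memory approximation (Greenwald-Khanna) instead of sorting, which is where the relativeError parameter matters.

```python
def approx_quantile(values, probabilities, relative_error):
    """Exact-rank illustration of the approxQuantile contract.

    relative_error is accepted for signature parity but unused here: sorting
    gives the exact rank, while Spark's sketch may return any element whose
    rank is within relative_error * len(values) of the target.
    """
    s = sorted(values)
    n = len(s)
    out = []
    for p in probabilities:
        # Clamp the target rank floor(p * n) into the valid index range.
        rank = min(n - 1, max(0, int(p * n)))
        out.append(s[rank])
    return out
```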
appSparkVersion() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo

AppStatusUtils - Class in org.apache.spark.status

AppStatusUtils() - Constructor for class org.apache.spark.status.AppStatusUtils

AreaUnderCurve - Class in org.apache.spark.mllib.evaluation
Computes the area under the curve (AUC) using the trapezoidal rule.
AreaUnderCurve() - Constructor for class org.apache.spark.mllib.evaluation.AreaUnderCurve

areaUnderPR() - 类 中的方法org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Computes the area under the precision-recall curve.
areaUnderROC() - 接口 中的方法org.apache.spark.ml.classification.BinaryLogisticRegressionSummary
Computes the area under the receiver operating characteristic (ROC) curve.
areaUnderROC() - 类 中的方法org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl
 
areaUnderROC() - 类 中的方法org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Computes the area under the receiver operating characteristic (ROC) curve.
argmax() - Method in class org.apache.spark.ml.linalg.DenseVector

argmax() - Method in class org.apache.spark.ml.linalg.SparseVector

argmax() - Method in interface org.apache.spark.ml.linalg.Vector
Find the index of a maximal element.
argmax() - Method in class org.apache.spark.mllib.linalg.DenseVector

argmax() - Method in class org.apache.spark.mllib.linalg.SparseVector

argmax() - Method in interface org.apache.spark.mllib.linalg.Vector
Find the index of a maximal element.
argString(int) - Method in interface org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectBase

arguments() - Method in interface org.apache.spark.sql.connector.expressions.Transform
Returns the arguments passed to the transform function.
array(DataType) - Method in class org.apache.spark.sql.ColumnName
Creates a new StructField of type array.
array(Column...) - Static method in class org.apache.spark.sql.functions
Creates a new array column.
array(String, String...) - Static method in class org.apache.spark.sql.functions
Creates a new array column.
array(Seq<Column>) - Static method in class org.apache.spark.sql.functions
Creates a new array column.
array(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
Creates a new array column.
array() - Method in class org.apache.spark.sql.vectorized.ColumnarArray

array_contains(Column, Object) - Static method in class org.apache.spark.sql.functions
Returns null if the array is null, true if the array contains the value, and false otherwise.
array_distinct(Column) - Static method in class org.apache.spark.sql.functions
Removes duplicate values from the array.
array_except(Column, Column) - Static method in class org.apache.spark.sql.functions
Returns an array of the elements in the first array but not in the second array, without duplicates.
array_intersect(Column, Column) - Static method in class org.apache.spark.sql.functions
Returns an array of the elements in the intersection of the given two arrays, without duplicates.
array_join(Column, String, String) - Static method in class org.apache.spark.sql.functions
Concatenates the elements of the given array column using the delimiter.
array_join(Column, String) - Static method in class org.apache.spark.sql.functions
Concatenates the elements of the given array column using the delimiter.
array_max(Column) - Static method in class org.apache.spark.sql.functions
Returns the maximum value in the array.
array_min(Column) - Static method in class org.apache.spark.sql.functions
Returns the minimum value in the array.
array_position(Column, Object) - Static method in class org.apache.spark.sql.functions
Locates the position of the first occurrence of the value in the given array, as a long.
array_remove(Column, Object) - Static method in class org.apache.spark.sql.functions
Removes all elements equal to the given element from the array.
array_repeat(Column, Column) - Static method in class org.apache.spark.sql.functions
Creates an array containing the left argument repeated the number of times given by the right argument.
array_repeat(Column, int) - Static method in class org.apache.spark.sql.functions
Creates an array containing the left argument repeated the number of times given by the right argument.
array_sort(Column) - Static method in class org.apache.spark.sql.functions
Sorts the input array in ascending order.
array_union(Column, Column) - Static method in class org.apache.spark.sql.functions
Returns an array of the elements in the union of the given two arrays, without duplicates.
arrayLengthGt(double) - Static method in class org.apache.spark.ml.param.ParamValidators
Check that the array length is greater than lowerBound.
arrays_overlap(Column, Column) - Static method in class org.apache.spark.sql.functions
Returns true if a1 and a2 have at least one non-null element in common.
arrays_zip(Column...) - Static method in class org.apache.spark.sql.functions
Returns a merged array of structs in which the N-th struct contains all N-th values of the input arrays.
arrays_zip(Seq<Column>) - Static method in class org.apache.spark.sql.functions
Returns a merged array of structs in which the N-th struct contains all N-th values of the input arrays.
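The set-style array functions above (array_distinct, array_except, array_intersect, array_union) all return results without duplicates, preserving first-seen order. A plain-Python sketch of that semantics (not Spark code; null handling is simplified):

```python
def array_distinct(a):
    # Drop duplicates while keeping first-seen order.
    seen, out = set(), []
    for x in a:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def array_except(a, b):
    # Elements of a not present in b, without duplicates.
    return [x for x in array_distinct(a) if x not in set(b)]

def array_intersect(a, b):
    # Elements of a also present in b, without duplicates.
    return [x for x in array_distinct(a) if x in set(b)]

def array_union(a, b):
    # All elements of a and b, without duplicates.
    return array_distinct(list(a) + list(b))
```

For example, `array_union([1, 2], [2, 3])` yields `[1, 2, 3]`.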
ArrayType - Class in org.apache.spark.sql.types

ArrayType(DataType, boolean) - Constructor for class org.apache.spark.sql.types.ArrayType

arrayValues() - Method in class org.apache.spark.storage.memory.DeserializedValuesHolder

ArrowColumnVector - Class in org.apache.spark.sql.vectorized
A column vector backed by Apache Arrow.
ArrowColumnVector(ValueVector) - Constructor for class org.apache.spark.sql.vectorized.ArrowColumnVector

ArrowUtils - Class in org.apache.spark.sql.util

ArrowUtils() - Constructor for class org.apache.spark.sql.util.ArrowUtils

as(Encoder<U>) - Method in class org.apache.spark.sql.Column
Provides a type hint about the expected return value of this column.
as(String) - Method in class org.apache.spark.sql.Column
Gives the column an alias.
as(Seq<String>) - Method in class org.apache.spark.sql.Column
(Scala-specific) Assigns the given aliases to the results of a table generating function.
as(String[]) - Method in class org.apache.spark.sql.Column
Assigns the given aliases to the results of a table generating function.
as(Symbol) - Method in class org.apache.spark.sql.Column
Gives the column an alias.
as(String, Metadata) - Method in class org.apache.spark.sql.Column
Gives the column an alias with metadata.
as(Encoder<U>) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset where each record has been mapped on to the specified type.
as(String) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset with an alias set.
as(Symbol) - Method in class org.apache.spark.sql.Dataset
(Scala-specific) Returns a new Dataset with an alias set.
asBinary() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
Convenient method for casting to a binary logistic regression summary.
asBreeze() - Method in interface org.apache.spark.ml.linalg.Matrix
Converts to a breeze matrix.
asBreeze() - Method in interface org.apache.spark.ml.linalg.Vector
Converts the instance to a breeze vector.
asBreeze() - Method in interface org.apache.spark.mllib.linalg.Matrix
Converts to a breeze matrix.
asBreeze() - Method in interface org.apache.spark.mllib.linalg.Vector
Converts the instance to a breeze vector.
asc() - Method in class org.apache.spark.sql.Column
Returns a sort expression based on ascending order of the column.
asc(String) - Static method in class org.apache.spark.sql.functions
Returns a sort expression based on ascending order of the column.
asc_nulls_first() - Method in class org.apache.spark.sql.Column
Returns a sort expression based on ascending order of the column, with null values appearing before non-null values.
asc_nulls_first(String) - Static method in class org.apache.spark.sql.functions
Returns a sort expression based on ascending order of the column, with null values appearing before non-null values.
asc_nulls_last() - Method in class org.apache.spark.sql.Column
Returns a sort expression based on ascending order of the column, with null values appearing after non-null values.
asc_nulls_last(String) - Static method in class org.apache.spark.sql.functions
Returns a sort expression based on ascending order of the column, with null values appearing after non-null values.
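The two null-ordering variants above differ only in where nulls land in the sorted output. A plain-Python sketch of the ordering, with None standing in for SQL null (not Spark code):

```python
def asc_nulls_first(values):
    # Nulls (None) sort before all non-null values; non-nulls ascend.
    return sorted(values, key=lambda v: (v is not None, v if v is not None else 0))

def asc_nulls_last(values):
    # Nulls (None) sort after all non-null values; non-nulls ascend.
    return sorted(values, key=lambda v: (v is None, v if v is not None else 0))
```

For example, `asc_nulls_first([3, None, 1])` yields `[None, 1, 3]`, while `asc_nulls_last` yields `[1, 3, None]`.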
asCaseSensitiveMap() - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
Returns the original case-sensitive map.
ascii(Column) - Static method in class org.apache.spark.sql.functions
Computes the numeric value of the first character of the string column, and returns the result as an int column.
asIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.MultipartIdentifierHelper

asin(Column) - Static method in class org.apache.spark.sql.functions

asin(String) - Static method in class org.apache.spark.sql.functions

asInteraction() - Static method in class org.apache.spark.ml.feature.Dot

asInteraction() - Method in interface org.apache.spark.ml.feature.InteractableTerm
Convert to ColumnInteraction to wrap all interactions.
asIterator() - Method in class org.apache.spark.serializer.DeserializationStream
Read the elements of this stream through an iterator.
asJavaPairRDD() - Method in class org.apache.spark.api.r.PairwiseRRDD

asJavaRDD() - Method in class org.apache.spark.api.r.RRDD

asJavaRDD() - Method in class org.apache.spark.api.r.StringRRDD

asKeyValueIterator() - Method in class org.apache.spark.serializer.DeserializationStream
Read the elements of this stream through an iterator over key-value pairs.
AskPermissionToCommitOutput - Class in org.apache.spark.scheduler

AskPermissionToCommitOutput(int, int, int, int) - Constructor for class org.apache.spark.scheduler.AskPermissionToCommitOutput

askRpcTimeout(SparkConf) - Static method in class org.apache.spark.util.RpcUtils
Returns the default Spark timeout to use for RPC ask operations.
askSlaves() - Method in class org.apache.spark.storage.BlockManagerMessages.GetBlockStatus

askSlaves() - Method in class org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds

asML() - Method in class org.apache.spark.mllib.linalg.DenseMatrix

asML() - Method in class org.apache.spark.mllib.linalg.DenseVector

asML() - Method in interface org.apache.spark.mllib.linalg.Matrix
Convert this matrix to the new mllib-local representation.
asML() - Method in class org.apache.spark.mllib.linalg.SparseMatrix

asML() - Method in class org.apache.spark.mllib.linalg.SparseVector

asML() - Method in interface org.apache.spark.mllib.linalg.Vector
Convert this vector to the new mllib-local representation.
asNamespaceCatalog() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper

asNondeterministic() - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
Updates UserDefinedFunction to nondeterministic.
asNonNullable() - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
Updates UserDefinedFunction to non-nullable.
asNullable() - Method in class org.apache.spark.sql.types.ObjectType

asPartitionColumns() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.TransformHelper

asRDDId() - Method in class org.apache.spark.storage.BlockId

assertConf(JobContext, SparkConf) - Method in class org.apache.spark.internal.io.HadoopWriteConfigUtil

assertExceptionMsg(Throwable, String) - Static method in class org.apache.spark.TestUtils
Asserts that the exception message contains the given message.
assertNotSpilled(SparkContext, String, Function0<BoxedUnit>) - Static method in class org.apache.spark.TestUtils
Run some code involving jobs submitted to the given context and assert that the jobs did not spill.
assertSpilled(SparkContext, String, Function0<BoxedUnit>) - Static method in class org.apache.spark.TestUtils
Run some code involving jobs submitted to the given context and assert that the jobs spilled.
assignClusters(Dataset<?>) - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
Runs the PIC algorithm and returns a cluster assignment for each input vertex.
assignedAddrs() - Method in interface org.apache.spark.resource.ResourceAllocator
Sequence of currently assigned resource addresses.
Assignment(long, int) - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment

Assignment$() - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment$

assignments() - Method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel

AssociationRules - Class in org.apache.spark.ml.fpm

AssociationRules() - Constructor for class org.apache.spark.ml.fpm.AssociationRules

associationRules() - Method in class org.apache.spark.ml.fpm.FPGrowthModel
Gets the association rules fitted using minConfidence.
AssociationRules - Class in org.apache.spark.mllib.fpm
Generates association rules from an RDD[FreqItemset[Item]].
AssociationRules() - Constructor for class org.apache.spark.mllib.fpm.AssociationRules
Constructs a default instance with default parameters {minConfidence = 0.8}.
AssociationRules.Rule<Item> - Class in org.apache.spark.mllib.fpm
An association rule between sets of items.
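An association rule X => Y is kept when its confidence, support(X ∪ Y) / support(X), meets minConfidence (default 0.8 above). A plain-Python sketch of that filter over precomputed frequent-itemset counts (toy data for illustration; not the MLlib implementation):

```python
def association_rules(freq, min_confidence=0.8):
    """freq maps frozenset(itemset) -> count.

    Emits (antecedent, consequent, confidence) for single-consequent rules
    whose confidence meets min_confidence.
    """
    rules = []
    for itemset, count in freq.items():
        for item in itemset:
            antecedent = itemset - {item}
            if antecedent in freq:  # skips the empty antecedent
                confidence = count / freq[antecedent]
                if confidence >= min_confidence:
                    rules.append((antecedent, item, confidence))
    return rules

# Hypothetical frequent-itemset counts.
freq = {
    frozenset({"a"}): 10,
    frozenset({"b"}): 8,
    frozenset({"a", "b"}): 8,
}
```

Here the rule {a} => b has confidence 8/10 = 0.8 and {b} => a has confidence 8/8 = 1.0, so both pass the default threshold.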
asTableCatalog() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper

asTableIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.MultipartIdentifierHelper

AsTableIdentifier() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog

AsTableIdentifier() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.AsTableIdentifier

AsTableIdentifier$() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.AsTableIdentifier$

AsTemporaryViewIdentifier() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog

AsTemporaryViewIdentifier() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.AsTemporaryViewIdentifier

AsTemporaryViewIdentifier$() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.AsTemporaryViewIdentifier$

asTerms() - Static method in class org.apache.spark.ml.feature.Dot

asTerms() - Static method in class org.apache.spark.ml.feature.EmptyTerm

asTerms() - Method in interface org.apache.spark.ml.feature.Term
Default representation of a single Term as a part of summed terms.
asTransform() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.BucketSpecHelper

asTransforms() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.PartitionTypeHelper

ASYNC_TRACKING_ENABLED() - Static method in class org.apache.spark.internal.config.Status

AsyncEventQueue - Class in org.apache.spark.scheduler
An asynchronous queue for events.
AsyncEventQueue(String, SparkConf, LiveListenerBusMetrics, LiveListenerBus) - Constructor for class org.apache.spark.scheduler.AsyncEventQueue

AsyncRDDActions<T> - Class in org.apache.spark.rdd
A set of asynchronous RDD actions available through an implicit conversion.
AsyncRDDActions(RDD<T>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.AsyncRDDActions

atan(Column) - Static method in class org.apache.spark.sql.functions

atan(String) - Static method in class org.apache.spark.sql.functions

atan2(Column, Column) - Static method in class org.apache.spark.sql.functions

atan2(Column, String) - Static method in class org.apache.spark.sql.functions

atan2(String, Column) - Static method in class org.apache.spark.sql.functions

atan2(String, String) - Static method in class org.apache.spark.sql.functions

atan2(Column, double) - Static method in class org.apache.spark.sql.functions

atan2(String, double) - Static method in class org.apache.spark.sql.functions

atan2(double, Column) - Static method in class org.apache.spark.sql.functions

atan2(double, String) - Static method in class org.apache.spark.sql.functions

attempt() - Method in class org.apache.spark.status.api.v1.TaskData

ATTEMPT() - Static method in class org.apache.spark.status.TaskIndexNames

attemptId() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo

attemptId() - Method in interface org.apache.spark.status.api.v1.BaseAppResource

attemptId() - Method in class org.apache.spark.status.api.v1.StageData

attemptNumber() - Method in class org.apache.spark.BarrierTaskContext

attemptNumber() - Method in class org.apache.spark.scheduler.AskPermissionToCommitOutput

attemptNumber() - Method in class org.apache.spark.scheduler.StageInfo

attemptNumber() - Method in class org.apache.spark.scheduler.TaskInfo

attemptNumber() - Method in class org.apache.spark.TaskCommitDenied

attemptNumber() - Method in class org.apache.spark.TaskContext
How many times this task has been attempted.
attempts() - Method in class org.apache.spark.status.api.v1.ApplicationInfo

AtTimestamp(Date) - Constructor for class org.apache.spark.streaming.kinesis.KinesisInitialPositions.AtTimestamp

attr() - Method in class org.apache.spark.graphx.Edge

attr() - Method in class org.apache.spark.graphx.EdgeContext
The attribute associated with the edge.
attr() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext

Attribute - Class in org.apache.spark.ml.attribute
:: DeveloperApi :: Abstract class for ML attributes.
Attribute() - Constructor for class org.apache.spark.ml.attribute.Attribute

attribute() - Method in class org.apache.spark.sql.sources.EqualNullSafe

attribute() - Method in class org.apache.spark.sql.sources.EqualTo

attribute() - Method in class org.apache.spark.sql.sources.GreaterThan

attribute() - Method in class org.apache.spark.sql.sources.GreaterThanOrEqual

attribute() - Method in class org.apache.spark.sql.sources.In

attribute() - Method in class org.apache.spark.sql.sources.IsNotNull

attribute() - Method in class org.apache.spark.sql.sources.IsNull

attribute() - Method in class org.apache.spark.sql.sources.LessThan

attribute() - Method in class org.apache.spark.sql.sources.LessThanOrEqual

attribute() - Method in class org.apache.spark.sql.sources.StringContains

attribute() - Method in class org.apache.spark.sql.sources.StringEndsWith

attribute() - Method in class org.apache.spark.sql.sources.StringStartsWith

AttributeFactory - Interface in org.apache.spark.ml.attribute
Trait for ML attribute factories.
AttributeGroup - Class in org.apache.spark.ml.attribute
:: DeveloperApi :: Attributes that describe a vector ML column.
AttributeGroup(String) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
Creates an attribute group without attribute info.
AttributeGroup(String, int) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
Creates an attribute group knowing only the number of attributes.
AttributeGroup(String, Attribute[]) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
Creates an attribute group with attributes.
AttributeKeys - Class in org.apache.spark.ml.attribute
Keys used to store attributes.
AttributeKeys() - Constructor for class org.apache.spark.ml.attribute.AttributeKeys

attributes() - Method in class org.apache.spark.ml.attribute.AttributeGroup
Optional array of attributes.
ATTRIBUTES() - Static method in class org.apache.spark.ml.attribute.AttributeKeys

attributes() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor

attributes() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo

attributes() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

attributes() - Method in class org.apache.spark.status.LiveExecutor

AttributeType - Class in org.apache.spark.ml.attribute
:: DeveloperApi :: An enum-like type for attribute types: AttributeType$.Numeric, AttributeType$.Nominal, and AttributeType$.Binary.
AttributeType(String) - Constructor for class org.apache.spark.ml.attribute.AttributeType

attrType() - Method in class org.apache.spark.ml.attribute.Attribute
Attribute type.
attrType() - Method in class org.apache.spark.ml.attribute.BinaryAttribute

attrType() - Method in class org.apache.spark.ml.attribute.NominalAttribute

attrType() - Method in class org.apache.spark.ml.attribute.NumericAttribute

attrType() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute

available() - Method in class org.apache.spark.io.NioBufferedFileInputStream

available() - Method in class org.apache.spark.io.ReadAheadInputStream

available() - Method in class org.apache.spark.storage.BufferReleasingInputStream

availableAddrs() - Method in interface org.apache.spark.resource.ResourceAllocator
Sequence of currently available resource addresses.
Average() - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy

avg(MapFunction<T, Double>) - Static method in class org.apache.spark.sql.expressions.javalang.typed
Deprecated.
Average aggregate function.
avg(Function1<IN, Object>) - Static method in class org.apache.spark.sql.expressions.scalalang.typed
Deprecated.
Average aggregate function.
avg(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the average of the values in a group.
avg(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the average of the values in a group.
avg(String...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
Compute the mean value for each numeric column for each group.
avg(Seq<String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
Compute the mean value for each numeric column for each group.
avg() - Method in class org.apache.spark.util.DoubleAccumulator
Returns the average of elements added to the accumulator.
avg() - Method in class org.apache.spark.util.LongAccumulator
Returns the average of elements added to the accumulator.
avgEventRate() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo

avgInputRate() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics

avgMetrics() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel

avgProcessingTime() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics

avgSchedulingDelay() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics

avgTotalDelay() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics

awaitAnyTermination() - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
Wait until any of the queries on the associated SQLContext has terminated since the creation of the context, or since resetTerminated() was called.
awaitAnyTermination(long) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
Wait until any of the queries on the associated SQLContext has terminated since the creation of the context, or since resetTerminated() was called.
awaitReady(Awaitable<T>, Duration) - Static method in class org.apache.spark.util.ThreadUtils
Preferred alternative to Await.ready().
awaitResult(Awaitable<T>, Duration) - Static method in class org.apache.spark.util.ThreadUtils
Preferred alternative to Await.result().
awaitTermination() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
Waits for the termination of this query, either by query.stop() or by an exception.
awaitTermination(long) - Method in interface org.apache.spark.sql.streaming.StreamingQuery
Waits for the termination of this query, either by query.stop() or by an exception.
awaitTermination() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Wait for the execution to stop.
awaitTermination() - Method in class org.apache.spark.streaming.StreamingContext
Wait for the execution to stop.
awaitTerminationOrTimeout(long) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Wait for the execution to stop.
awaitTerminationOrTimeout(long) - Method in class org.apache.spark.streaming.StreamingContext
Wait for the execution to stop.
axpy(double, Vector, Vector) - Static method in class org.apache.spark.ml.linalg.BLAS
y += a * x
axpy(double, Vector, Vector) - Static method in class org.apache.spark.mllib.linalg.BLAS
y += a * x
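axpy is the standard BLAS level-1 update y := a*x + y, mutating y in place. A plain-Python sketch of the dense-vector case (not the MLlib implementation):

```python
def axpy(a, x, y):
    # In-place y += a * x for equal-length dense vectors.
    for i in range(len(y)):
        y[i] += a * x[i]

y = [1.0, 2.0, 3.0]
axpy(2.0, [1.0, 1.0, 1.0], y)
# y is now [3.0, 4.0, 5.0]
```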

B

BACKUP_STANDALONE_MASTER_PREFIX() - Static method in class org.apache.spark.util.Utils
An identifier that backup masters use in their responses.
balanceSlack() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer

barrier() - Method in class org.apache.spark.BarrierTaskContext
:: Experimental :: Sets a global barrier and waits until all tasks in this stage hit this barrier.
barrier() - Method in class org.apache.spark.rdd.RDD
:: Experimental :: Marks the current stage as a barrier stage, where Spark must launch all tasks together.
BarrierCoordinatorMessage - Interface in org.apache.spark

BarrierTaskContext - Class in org.apache.spark
:: Experimental :: A TaskContext with extra contextual info and tooling for tasks in a barrier stage.
BarrierTaskInfo - Class in org.apache.spark
:: Experimental :: Carries all task infos of a barrier task.
base64(Column) - Static method in class org.apache.spark.sql.functions
Computes the BASE64 encoding of a binary column and returns it as a string column.
BaseAppResource - Interface in org.apache.spark.status.api.v1
Base class for resource handlers that use app-specific data.
baseOn(ParamPair<?>...) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Sets the given parameters in this grid to fixed values.
baseOn(ParamMap) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Sets the given parameters in this grid to fixed values.
baseOn(Seq<ParamPair<?>>) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Sets the given parameters in this grid to fixed values.
BaseReadWrite - Interface in org.apache.spark.ml.util
Trait for MLWriter and MLReader.
BaseRelation - Class in org.apache.spark.sql.sources
Represents a collection of tuples with a known schema.
BaseRelation() - Constructor for class org.apache.spark.sql.sources.BaseRelation

baseRelationToDataFrame(BaseRelation) - Method in class org.apache.spark.sql.SparkSession
Convert a BaseRelation created for external data sources into a DataFrame.
baseRelationToDataFrame(BaseRelation) - Method in class org.apache.spark.sql.SQLContext
Convert a BaseRelation created for external data sources into a DataFrame.
BaseRRDD<T,U> - Class in org.apache.spark.api.r

BaseRRDD(RDD<T>, int, byte[], String, String, byte[], Broadcast<Object>[], ClassTag<T>, ClassTag<U>) - Constructor for class org.apache.spark.api.r.BaseRRDD

BaseStreamingAppResource - Interface in org.apache.spark.status.api.v1.streaming
Base class for streaming API handlers, provides easy access to the streaming listener that holds the app's information.
BasicBlockReplicationPolicy - Class in org.apache.spark.storage

BasicBlockReplicationPolicy() - Constructor for class org.apache.spark.storage.BasicBlockReplicationPolicy

basicCredentials(String, String) - Method in class org.apache.spark.streaming.kinesis.SparkAWSCredentials.Builder
Use a basic AWS keypair for long-lived authorization.
basicSparkPage(HttpServletRequest, Function0<Seq<Node>>, String, boolean) - Static method in class org.apache.spark.ui.UIUtils
Returns a page with the spark css/js and a simple format.
Batch - Interface in org.apache.spark.sql.connector.read
A physical representation of a data source scan for batch queries.
batchDuration() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo

batchDuration() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics

BATCHES() - Static method in class org.apache.spark.mllib.clustering.StreamingKMeans

batchId() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress

batchId() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo

BatchInfo - Class in org.apache.spark.status.api.v1.streaming

BatchInfo - Class in org.apache.spark.streaming.scheduler
:: DeveloperApi :: Class having information on completed batches.
BatchInfo(Time, Map<Object, StreamInputInfo>, long, Option<Object>, Option<Object>, Map<Object, OutputOperationInfo>) - Constructor for class org.apache.spark.streaming.scheduler.BatchInfo

batchInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted

batchInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted

batchInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted

batchInfos() - Method in class org.apache.spark.streaming.scheduler.StatsReportListener

BatchStatus - Enum in org.apache.spark.status.api.v1.streaming

batchTime() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo

batchTime() - Method in class org.apache.spark.streaming.scheduler.BatchInfo

batchTime() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo

BatchWrite - Interface in org.apache.spark.sql.connector.write
An interface that defines how to write the data to the data source for batch processing.
bbos() - Method in class org.apache.spark.storage.memory.SerializedValuesHolder

bean(Class<T>) - Static method in class org.apache.spark.sql.Encoders
Creates an encoder for a Java Bean of type T.
beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect

beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect

beforeFetch(Connection, Map<String, String>) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
Override connection-specific properties to run before a select is made.
beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect

beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect

beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.NoopDialect

beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.OracleDialect

beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect

beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect

BernoulliCellSampler<T> - Class in org.apache.spark.util.random
:: DeveloperApi :: A sampler based on Bernoulli trials for partitioning a data sequence.
BernoulliCellSampler(double, double, boolean) - Constructor for class org.apache.spark.util.random.BernoulliCellSampler

BernoulliSampler<T> - Class in org.apache.spark.util.random
:: DeveloperApi :: A sampler based on Bernoulli trials.
BernoulliSampler(double, ClassTag<T>) - Constructor for class org.apache.spark.util.random.BernoulliSampler

bestModel() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel

bestModel() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel

beta() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

beta() - Method in class org.apache.spark.mllib.random.WeibullGenerator

between(Object, Object) - Method in class org.apache.spark.sql.Column
True if the current column is between the lower bound and upper bound, inclusive.
bin(Column) - Static method in class org.apache.spark.sql.functions
An expression that returns the string representation of the binary value of the given long column.
bin(String) - Static method in class org.apache.spark.sql.functions
An expression that returns the string representation of the binary value of the given long column.
Binarizer - Class in org.apache.spark.ml.feature
Binarize a column of continuous features given a threshold.
Binarizer(String) - Constructor for class org.apache.spark.ml.feature.Binarizer

Binarizer() - Constructor for class org.apache.spark.ml.feature.Binarizer

Binary() - Static method in class org.apache.spark.ml.attribute.AttributeType
Binary type.
binary() - Method in class org.apache.spark.ml.feature.CountVectorizer

binary() - Method in class org.apache.spark.ml.feature.CountVectorizerModel

binary() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
Binary toggle to control the output vector values.
binary() - Method in class org.apache.spark.ml.feature.HashingTF
Binary toggle to control term frequency counts.
binary() - Method in class org.apache.spark.sql.ColumnName
Creates a new StructField of type binary.
BINARY() - Static method in class org.apache.spark.sql.Encoders
An encoder for arrays of bytes.
BinaryAttribute - Class in org.apache.spark.ml.attribute
:: DeveloperApi :: A binary attribute.
BinaryClassificationEvaluator - Class in org.apache.spark.ml.evaluation
Evaluator for binary classification, which expects two input columns: rawPrediction and label.
BinaryClassificationEvaluator(String) - Constructor for class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator

BinaryClassificationEvaluator() - Constructor for class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator

BinaryClassificationMetricComputer - Interface in org.apache.spark.mllib.evaluation.binary
Trait for a binary classification evaluation metric computer.
BinaryClassificationMetrics - Class in org.apache.spark.mllib.evaluation
Evaluator for binary classification.
BinaryClassificationMetrics(RDD<? extends Product>, int) - Constructor for class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics

BinaryClassificationMetrics(RDD<Tuple2<Object, Object>>) - Constructor for class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Defaults numBins to 0.
BinaryConfusionMatrix - Interface in org.apache.spark.mllib.evaluation.binary
Trait for a binary confusion matrix.
binaryFiles(String, int) - Method in class org.apache.spark.api.java.JavaSparkContext
Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array.
binaryFiles(String) - Method in class org.apache.spark.api.java.JavaSparkContext
Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array.
binaryFiles(String, int) - Method in class org.apache.spark.SparkContext
Get an RDD for a Hadoop-readable dataset as PortableDataStream for each file (useful for binary data). For example, if you have the following files: hdfs://a-hdfs-path/part-00000 hdfs://a-hdfs-path/part-00001 ...
binaryLabelValidator() - Static method in class org.apache.spark.mllib.util.DataValidators
Function to check if labels used for classification are either zero or one.
BinaryLogisticRegressionSummary - Interface in org.apache.spark.ml.classification
Abstraction for binary logistic regression results for a given model.
BinaryLogisticRegressionSummaryImpl - Class in org.apache.spark.ml.classification
Binary logistic regression results for a given model.
BinaryLogisticRegressionSummaryImpl(Dataset<Row>, String, String, String, String) - Constructor for class org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl

BinaryLogisticRegressionTrainingSummary - Interface in org.apache.spark.ml.classification
Abstraction for binary logistic regression training results.
BinaryLogisticRegressionTrainingSummaryImpl - Class in org.apache.spark.ml.classification
Binary logistic regression training results.
BinaryLogisticRegressionTrainingSummaryImpl(Dataset<Row>, String, String, String, String, double[]) - Constructor for class org.apache.spark.ml.classification.BinaryLogisticRegressionTrainingSummaryImpl

binaryRecords(String, int) - Method in class org.apache.spark.api.java.JavaSparkContext
Load data from a flat binary file, assuming the length of each record is constant.
binaryRecords(String, int, Configuration) - Method in class org.apache.spark.SparkContext
Load data from a flat binary file, assuming the length of each record is constant.
binaryRecordsStream(String, int) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them as flat binary files with fixed record lengths, yielding byte arrays.
binaryRecordsStream(String, int) - Method in class org.apache.spark.streaming.StreamingContext
Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them as flat binary files, assuming a fixed length per record, generating one byte array per record.
BinarySample - Class in org.apache.spark.mllib.stat.test
Class that represents the group and value of a sample.
BinarySample(boolean, double) - Constructor for class org.apache.spark.mllib.stat.test.BinarySample

binarySummary() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
Gets the summary of the model on the training set.
BinaryType - Class in org.apache.spark.sql.types
The data type representing Array[Byte] values.
BinaryType() - Constructor for class org.apache.spark.sql.types.BinaryType

BinaryType - Static variable in class org.apache.spark.sql.types.DataTypes
Gets the BinaryType object.
Binomial$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$

BinomialBounds - Class in org.apache.spark.util.random
Utility functions that help us determine bounds on adjusted sampling rate to guarantee exact sample size with high confidence when sampling without replacement.
BinomialBounds() - Constructor for class org.apache.spark.util.random.BinomialBounds

BisectingKMeans - Class in org.apache.spark.ml.clustering
A bisecting k-means algorithm based on the paper "A comparison of document clustering techniques" by Steinbach, Karypis, and Kumar, with modification to fit Spark.
BisectingKMeans(String) - Constructor for class org.apache.spark.ml.clustering.BisectingKMeans

BisectingKMeans() - Constructor for class org.apache.spark.ml.clustering.BisectingKMeans

BisectingKMeans - Class in org.apache.spark.mllib.clustering
A bisecting k-means algorithm based on the paper "A comparison of document clustering techniques" by Steinbach, Karypis, and Kumar, with modification to fit Spark.
BisectingKMeans() - Constructor for class org.apache.spark.mllib.clustering.BisectingKMeans
Constructs with the default configuration.
BisectingKMeansModel - Class in org.apache.spark.ml.clustering
Model fitted by BisectingKMeans.
BisectingKMeansModel - Class in org.apache.spark.mllib.clustering
Clustering model produced by BisectingKMeans.
BisectingKMeansModel(ClusteringTreeNode) - Constructor for class org.apache.spark.mllib.clustering.BisectingKMeansModel

BisectingKMeansModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.clustering

BisectingKMeansModel.SaveLoadV2_0$ - Class in org.apache.spark.mllib.clustering

BisectingKMeansModel.SaveLoadV3_0$ - Class in org.apache.spark.mllib.clustering

BisectingKMeansParams - Interface in org.apache.spark.ml.clustering
Common params for BisectingKMeans and BisectingKMeansModel.
BisectingKMeansSummary - Class in org.apache.spark.ml.clustering
Summary of BisectingKMeans.
bitSize() - Method in class org.apache.spark.util.sketch.BloomFilter
Returns the number of bits in the underlying bit array.
bitwiseAND(Object) - Method in class org.apache.spark.sql.Column
Compute bitwise AND of this expression with another expression.
bitwiseNOT(Column) - Static method in class org.apache.spark.sql.functions
Computes bitwise NOT (~) of a number.
bitwiseOR(Object) - Method in class org.apache.spark.sql.Column
Compute bitwise OR of this expression with another expression.
bitwiseXOR(Object) - Method in class org.apache.spark.sql.Column
Compute bitwise XOR of this expression with another expression.
BLACKLISTED() - 类 中的静态方法org.apache.spark.ui.ToolTips
 
BlacklistedExecutor - org.apache.spark.scheduler中的类
 
BlacklistedExecutor(String, long) - 类 的构造器org.apache.spark.scheduler.BlacklistedExecutor
 
blackListedExecutors() - 类 中的方法org.apache.spark.status.LiveStage
 
blacklistedInStages() - 类 中的方法org.apache.spark.status.api.v1.ExecutorSummary
 
blacklistedInStages() - 类 中的方法org.apache.spark.status.LiveExecutor
 
BLAS - org.apache.spark.ml.linalg中的类
BLAS routines for MLlib's vectors and matrices.
BLAS() - 类 的构造器org.apache.spark.ml.linalg.BLAS
 
BLAS - org.apache.spark.mllib.linalg中的类
BLAS routines for MLlib's vectors and matrices.
BLAS() - 类 的构造器org.apache.spark.mllib.linalg.BLAS
 
BlockData - org.apache.spark.storage中的接口
Abstracts away how blocks are stored and provides different ways to read the underlying block data.
blockedByLock() - 类 中的方法org.apache.spark.status.api.v1.ThreadStackTrace
 
blockedByThreadId() - 类 中的方法org.apache.spark.status.api.v1.ThreadStackTrace
 
BlockEvictionHandler - org.apache.spark.storage.memory中的接口
 
BlockGeneratorListener - org.apache.spark.streaming.receiver中的接口
Listener object for BlockGenerator events
BlockId - org.apache.spark.storage中的类
:: DeveloperApi :: Identifies a particular Block of data, usually associated with a single file.
BlockId() - 类 的构造器org.apache.spark.storage.BlockId
 
blockId() - 类 中的方法org.apache.spark.storage.BlockManagerMessages.GetBlockStatus
 
blockId() - 类 中的方法org.apache.spark.storage.BlockManagerMessages.GetLocations
 
blockId() - 类 中的方法org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus
 
blockId() - 类 中的方法org.apache.spark.storage.BlockManagerMessages.RemoveBlock
 
blockId() - 类 中的方法org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
 
blockId() - 类 中的方法org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
 
blockId() - 类 中的方法org.apache.spark.storage.BlockUpdatedInfo
 
blockId() - 接口 中的方法org.apache.spark.streaming.receiver.ReceivedBlockStoreResult
 
blockIds() - 类 中的方法org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds
 
BlockLocationsAndStatus(Seq<BlockManagerId>, BlockStatus, Option<String[]>) - 类 的构造器org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus
 
BlockLocationsAndStatus$() - 类 的构造器org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus$
 
blockManager() - 类 中的方法org.apache.spark.SparkEnv
 
blockManagerAddedFromJson(JsonAST.JValue) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
blockManagerAddedToJson(SparkListenerBlockManagerAdded) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
BlockManagerHeartbeat(BlockManagerId) - 类 的构造器org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat
 
BlockManagerHeartbeat$() - 类 的构造器org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat$
 
blockManagerId() - 类 中的方法org.apache.spark.scheduler.SparkListenerBlockManagerAdded
 
blockManagerId() - 类 中的方法org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
 
BlockManagerId - org.apache.spark.storage中的类
:: DeveloperApi :: This class represent a unique identifier for a BlockManager.
BlockManagerId() - 类 的构造器org.apache.spark.storage.BlockManagerId
 
blockManagerId() - 类 中的方法org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat
 
blockManagerId() - 类 中的方法org.apache.spark.storage.BlockManagerMessages.GetPeers
 
blockManagerId() - 类 中的方法org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
 
blockManagerId() - 类 中的方法org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
 
blockManagerId() - 类 中的方法org.apache.spark.storage.BlockUpdatedInfo
 
blockManagerIdCache() - 类 中的静态方法org.apache.spark.storage.BlockManagerId
The max cache size is hardcoded to 10000, since the size of a BlockManagerId object is about 48B, the total memory cost should be below 1MB which is feasible.
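The sizing claim above is simple arithmetic, checked here as an illustration (the ~48-byte object size is the estimate quoted in the entry, not something measured):

```python
# 10000 cached BlockManagerId objects at ~48 bytes apiece.
entries = 10_000
bytes_per_entry = 48
total_bytes = entries * bytes_per_entry   # 480_000 bytes
assert total_bytes < 1 << 20              # comfortably under 1 MiB
```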
blockManagerIdFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

blockManagerIdToJson(BlockManagerId) - Static method in class org.apache.spark.util.JsonProtocol

BlockManagerMessages - Class in org.apache.spark.storage

BlockManagerMessages() - Constructor for class org.apache.spark.storage.BlockManagerMessages

BlockManagerMessages.BlockLocationsAndStatus - Class in org.apache.spark.storage
The response message of a GetLocationsAndStatus request.
BlockManagerMessages.BlockLocationsAndStatus$ - Class in org.apache.spark.storage

BlockManagerMessages.BlockManagerHeartbeat - Class in org.apache.spark.storage

BlockManagerMessages.BlockManagerHeartbeat$ - Class in org.apache.spark.storage

BlockManagerMessages.GetBlockStatus - Class in org.apache.spark.storage

BlockManagerMessages.GetBlockStatus$ - Class in org.apache.spark.storage

BlockManagerMessages.GetExecutorEndpointRef - Class in org.apache.spark.storage

BlockManagerMessages.GetExecutorEndpointRef$ - Class in org.apache.spark.storage

BlockManagerMessages.GetLocations - Class in org.apache.spark.storage

BlockManagerMessages.GetLocations$ - Class in org.apache.spark.storage

BlockManagerMessages.GetLocationsAndStatus - Class in org.apache.spark.storage

BlockManagerMessages.GetLocationsAndStatus$ - Class in org.apache.spark.storage

BlockManagerMessages.GetLocationsMultipleBlockIds - Class in org.apache.spark.storage

BlockManagerMessages.GetLocationsMultipleBlockIds$ - Class in org.apache.spark.storage

BlockManagerMessages.GetMatchingBlockIds - Class in org.apache.spark.storage

BlockManagerMessages.GetMatchingBlockIds$ - Class in org.apache.spark.storage

BlockManagerMessages.GetMemoryStatus$ - Class in org.apache.spark.storage

BlockManagerMessages.GetPeers - Class in org.apache.spark.storage

BlockManagerMessages.GetPeers$ - Class in org.apache.spark.storage

BlockManagerMessages.GetStorageStatus$ - Class in org.apache.spark.storage

BlockManagerMessages.IsExecutorAlive - Class in org.apache.spark.storage

BlockManagerMessages.IsExecutorAlive$ - Class in org.apache.spark.storage

BlockManagerMessages.RegisterBlockManager - Class in org.apache.spark.storage

BlockManagerMessages.RegisterBlockManager$ - Class in org.apache.spark.storage

BlockManagerMessages.RemoveBlock - Class in org.apache.spark.storage

BlockManagerMessages.RemoveBlock$ - Class in org.apache.spark.storage

BlockManagerMessages.RemoveBroadcast - Class in org.apache.spark.storage

BlockManagerMessages.RemoveBroadcast$ - Class in org.apache.spark.storage

BlockManagerMessages.RemoveExecutor - Class in org.apache.spark.storage

BlockManagerMessages.RemoveExecutor$ - Class in org.apache.spark.storage

BlockManagerMessages.RemoveRdd - Class in org.apache.spark.storage

BlockManagerMessages.RemoveRdd$ - Class in org.apache.spark.storage

BlockManagerMessages.RemoveShuffle - Class in org.apache.spark.storage

BlockManagerMessages.RemoveShuffle$ - Class in org.apache.spark.storage

BlockManagerMessages.ReplicateBlock - Class in org.apache.spark.storage

BlockManagerMessages.ReplicateBlock$ - Class in org.apache.spark.storage

BlockManagerMessages.StopBlockManagerMaster$ - Class in org.apache.spark.storage

BlockManagerMessages.ToBlockManagerMaster - Interface in org.apache.spark.storage

BlockManagerMessages.ToBlockManagerSlave - Interface in org.apache.spark.storage

BlockManagerMessages.TriggerThreadDump$ - Class in org.apache.spark.storage
Driver-to-executor message to trigger a thread dump.
BlockManagerMessages.UpdateBlockInfo - Class in org.apache.spark.storage

BlockManagerMessages.UpdateBlockInfo$ - Class in org.apache.spark.storage

blockManagerRemovedFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

blockManagerRemovedToJson(SparkListenerBlockManagerRemoved) - Static method in class org.apache.spark.util.JsonProtocol

BlockMatrix - Class in org.apache.spark.mllib.linalg.distributed
Represents a distributed matrix in blocks of local matrices.
BlockMatrix(RDD<Tuple2<Tuple2<Object, Object>, Matrix>>, int, int, long, long) - Constructor for class org.apache.spark.mllib.linalg.distributed.BlockMatrix

BlockMatrix(RDD<Tuple2<Tuple2<Object, Object>, Matrix>>, int, int) - Constructor for class org.apache.spark.mllib.linalg.distributed.BlockMatrix
Alternate constructor for BlockMatrix that does not require the number of rows and columns as input.
blockName() - Method in class org.apache.spark.status.api.v1.RDDPartitionInfo

blockName() - Method in class org.apache.spark.status.LiveRDDPartition

BlockNotFoundException - Exception in org.apache.spark.storage

BlockNotFoundException(String) - Constructor for exception org.apache.spark.storage.BlockNotFoundException

BlockReplicationPolicy - Interface in org.apache.spark.storage
::DeveloperApi:: BlockReplicationPrioritization provides logic for prioritizing a sequence of peers for replicating blocks.
BlockReplicationUtils - Class in org.apache.spark.storage

BlockReplicationUtils() - Constructor for class org.apache.spark.storage.BlockReplicationUtils

blocks() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix

blockSize() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier

blockSize() - Method in interface org.apache.spark.ml.classification.MultilayerPerceptronParams
Block size for stacking input data in matrices to speed up the computation.
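The blockSize parameter groups input rows into fixed-size blocks so each block can be processed as one matrix operation. A sketch of the grouping itself (illustrative, not Spark's stacking code):

```python
def stack_blocks(rows, block_size):
    # Partition rows into consecutive chunks of at most block_size elements;
    # the final block may be smaller when len(rows) is not a multiple.
    return [rows[i:i + block_size] for i in range(0, len(rows), block_size)]
```

For example, `stack_blocks(list(range(5)), 2)` yields `[[0, 1], [2, 3], [4]]`.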
BlockStatus - Class in org.apache.spark.storage

BlockStatus(StorageLevel, long, long) - Constructor for class org.apache.spark.storage.BlockStatus

blockStatusFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

blockStatusToJson(BlockStatus) - Static method in class org.apache.spark.util.JsonProtocol

blockUpdatedInfo() - Method in class org.apache.spark.scheduler.SparkListenerBlockUpdated

BlockUpdatedInfo - Class in org.apache.spark.storage
:: DeveloperApi :: Stores information about a block status in a block manager.
BlockUpdatedInfo(BlockManagerId, BlockId, StorageLevel, long, long) - Constructor for class org.apache.spark.storage.BlockUpdatedInfo

blockUpdatedInfoFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

blockUpdatedInfoToJson(BlockUpdatedInfo) - Static method in class org.apache.spark.util.JsonProtocol

blockUpdateFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

blockUpdateToJson(SparkListenerBlockUpdated) - Static method in class org.apache.spark.util.JsonProtocol

bloomFilter(String, long, double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Builds a Bloom filter over a specified column.
bloomFilter(Column, long, double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Builds a Bloom filter over a specified column.
bloomFilter(String, long, long) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Builds a Bloom filter over a specified column.
bloomFilter(Column, long, long) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Builds a Bloom filter over a specified column.
BloomFilter - Class in org.apache.spark.util.sketch
A Bloom filter is a space-efficient probabilistic data structure that offers an approximate containment test with one-sided error: if it claims that an item is contained in it, this might be in error, but if it claims that an item is not contained in it, then this is definitely true.
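The one-sided error described above can be seen in a minimal toy Bloom filter (illustrative only; Spark's org.apache.spark.util.sketch.BloomFilter uses its own hashing and bit-array layout):

```python
import hashlib

class TinyBloom:
    """k hash positions over an m-bit array; no false negatives by construction."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = 0                       # big int standing in for a bit array

    def _positions(self, item):
        # Derive k deterministic bit positions by salting the item with an index.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def put(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # True for every inserted item; may (rarely) be true for others.
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

Every item that was `put` is guaranteed to report `might_contain(...) == True`; absent items are almost always rejected.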
BloomFilter() - Constructor for class org.apache.spark.util.sketch.BloomFilter

BloomFilter.Version - Enum in org.apache.spark.util.sketch

bmAddress() - Method in class org.apache.spark.FetchFailed

BOOLEAN() - Static method in class org.apache.spark.sql.Encoders
An encoder for nullable boolean type.
BooleanParam - Class in org.apache.spark.ml.param
:: DeveloperApi :: Specialized version of Param[Boolean] for Java.
BooleanParam(String, String, String) - Constructor for class org.apache.spark.ml.param.BooleanParam

BooleanParam(Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.BooleanParam

BooleanType - Class in org.apache.spark.sql.types
The data type representing Boolean values.
BooleanType() - Constructor for class org.apache.spark.sql.types.BooleanType

BooleanType - Static variable in class org.apache.spark.sql.types.DataTypes
Gets the BooleanType object.
boost(RDD<org.apache.spark.ml.feature.Instance>, RDD<org.apache.spark.ml.feature.Instance>, BoostingStrategy, boolean, long, String) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
Internal method for performing regression using trees as base learners.
BoostingStrategy - Class in org.apache.spark.mllib.tree.configuration
Configuration options for GradientBoostedTrees.
BoostingStrategy(Strategy, Loss, int, double, double) - Constructor for class org.apache.spark.mllib.tree.configuration.BoostingStrategy

Both() - Static method in class org.apache.spark.graphx.EdgeDirection
Edges originating from *and* arriving at a vertex of interest.
boundaries() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
Boundaries in increasing order for which predictions are known.
boundaries() - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel

BoundedDouble - Class in org.apache.spark.partial
A Double value with error bars and associated confidence.
BoundedDouble(double, double, double, double) - Constructor for class org.apache.spark.partial.BoundedDouble

BreezeUtil - Class in org.apache.spark.ml.ann
In-place DGEMM and DGEMV for Breeze.
BreezeUtil() - Constructor for class org.apache.spark.ml.ann.BreezeUtil

broadcast(T) - Method in class org.apache.spark.api.java.JavaSparkContext
Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions.
Broadcast<T> - Class in org.apache.spark.broadcast
A broadcast variable.
Broadcast(long, ClassTag<T>) - Constructor for class org.apache.spark.broadcast.Broadcast

broadcast(T, ClassTag<T>) - Method in class org.apache.spark.SparkContext
Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions.
broadcast(Dataset<T>) - Static method in class org.apache.spark.sql.functions
Marks a DataFrame as small enough for use in broadcast joins.
BROADCAST() - Static method in class org.apache.spark.storage.BlockId

BroadcastBlockId - Class in org.apache.spark.storage

BroadcastBlockId(long, String) - Constructor for class org.apache.spark.storage.BroadcastBlockId

broadcastCleaned(long) - Method in interface org.apache.spark.CleanerListener

BroadcastFactory - Interface in org.apache.spark.broadcast
An interface for all the broadcast implementations in Spark (to allow multiple broadcast implementations).
broadcastId() - Method in class org.apache.spark.CleanBroadcast

broadcastId() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast

broadcastId() - Method in class org.apache.spark.storage.BroadcastBlockId

broadcastManager() - Method in class org.apache.spark.SparkEnv

bround(Column) - Static method in class org.apache.spark.sql.functions
Returns the value of the column e rounded to 0 decimal places with HALF_EVEN round mode.
bround(Column, int) - Static method in class org.apache.spark.sql.functions
Rounds the value of e to scale decimal places with HALF_EVEN round mode if scale is greater than or equal to 0, or at the integral part when scale is less than 0.
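HALF_EVEN ("banker's") rounding, the mode bround uses, rounds ties to the nearest even digit. It can be sketched with the standard-library decimal module (this mirrors the rounding mode only, not Spark's column machinery):

```python
from decimal import Decimal, ROUND_HALF_EVEN

def bround(x, scale=0):
    # scale >= 0: round to that many decimal places; ties go to the even digit.
    exp = Decimal(1).scaleb(-scale)   # e.g. scale=1 -> Decimal('0.1')
    return float(Decimal(str(x)).quantize(exp, rounding=ROUND_HALF_EVEN))
```

So `bround(2.5)` gives `2.0` while `bround(3.5)` gives `4.0`, unlike ordinary half-up rounding.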
bucket(int, String...) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
Create a bucket transform for one or more columns.
bucket(int, Seq<String>) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions

bucket(Column, Column) - Static method in class org.apache.spark.sql.functions
A transform for any type that partitions by a hash of the input column.
bucket(int, Column) - Static method in class org.apache.spark.sql.functions
A transform for any type that partitions by a hash of the input column.
bucketBy(int, String, String...) - Method in class org.apache.spark.sql.DataFrameWriter
Buckets the output by the given columns.
bucketBy(int, String, Seq<String>) - Method in class org.apache.spark.sql.DataFrameWriter
Buckets the output by the given columns.
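Conceptually, bucketBy and the bucket transform route each row to one of a fixed number of buckets by hashing the bucketing column, so equal keys always land in the same bucket. Spark uses its own hash function internally; crc32 below is purely for illustration:

```python
import zlib

def bucket_of(key, num_buckets):
    # Deterministic hash of the key modulo the bucket count: identical keys
    # always map to the same bucket, enabling shuffle-free joins on the key.
    return zlib.crc32(str(key).encode("utf-8")) % num_buckets
```

Because the mapping is deterministic, two tables bucketed the same way can be joined bucket-by-bucket.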
BucketedRandomProjectionLSH - Class in org.apache.spark.ml.feature
This BucketedRandomProjectionLSH implements Locality Sensitive Hashing functions for Euclidean distance metrics.
BucketedRandomProjectionLSH(String) - Constructor for class org.apache.spark.ml.feature.BucketedRandomProjectionLSH

BucketedRandomProjectionLSH() - Constructor for class org.apache.spark.ml.feature.BucketedRandomProjectionLSH

BucketedRandomProjectionLSHModel - Class in org.apache.spark.ml.feature
Model produced by BucketedRandomProjectionLSH, where multiple random vectors are stored.
BucketedRandomProjectionLSHParams - Interface in org.apache.spark.ml.feature
Bucketizer - Class in org.apache.spark.ml.feature
Bucketizer maps a column of continuous features to a column of feature buckets.
Bucketizer(String) - Constructor for class org.apache.spark.ml.feature.Bucketizer

Bucketizer() - Constructor for class org.apache.spark.ml.feature.Bucketizer

bucketLength() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH

bucketLength() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel

bucketLength() - Method in interface org.apache.spark.ml.feature.BucketedRandomProjectionLSHParams
The length of each hash bucket; a larger bucket lowers the false negative rate.
BucketSpecHelper(BucketSpec) - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.BucketSpecHelper

buf() - Method in class org.apache.spark.sql.hive.HiveUDAFBuffer

buffer() - Method in class org.apache.spark.storage.memory.SerializedMemoryEntry

bufferEncoder() - Method in class org.apache.spark.ml.feature.StringIndexerAggregator

bufferEncoder() - Method in class org.apache.spark.sql.expressions.Aggregator
Specifies the Encoder for the intermediate value type.
BufferReleasingInputStream - Class in org.apache.spark.storage
Helper class that ensures a ManagedBuffer is released upon InputStream.close() and also detects stream corruption if streamCompressedOrEncrypted is true.
BufferReleasingInputStream(InputStream, ShuffleBlockFetcherIterator, BlockId, int, BlockManagerId, boolean) - Constructor for class org.apache.spark.storage.BufferReleasingInputStream

bufferSchema() - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
A StructType that represents the data types of values in the aggregation buffer.
build(Node, int) - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData$
Create DecisionTreeModelReadWrite.NodeData instances for this node and all children.
build(DecisionTreeModel, int) - Method in class org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData$
Create EnsembleModelReadWrite.EnsembleNodeData instances for the given tree.
build() - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Builds and returns all combinations of parameters specified by the param grid.
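The "all combinations" behavior of ParamGridBuilder.build is a cross product over the supplied parameter values, which can be sketched with itertools.product (illustrative; the real builder returns ParamMap objects, not dicts):

```python
from itertools import product

def param_grid(grid):
    # grid maps each parameter name to its list of candidate values;
    # emit one dict per element of the cross product of those lists.
    keys = list(grid)
    return [dict(zip(keys, values)) for values in product(*grid.values())]
```

For example, two values each for `regParam` and `maxIter` yield 2 x 2 = 4 parameter maps.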
build() - Method in interface org.apache.spark.sql.connector.read.ScanBuilder

build() - Method in class org.apache.spark.sql.types.MetadataBuilder
Builds the Metadata instance.
build() - Method in interface org.apache.spark.storage.memory.MemoryEntryBuilder

build() - Method in class org.apache.spark.streaming.kinesis.SparkAWSCredentials.Builder
Returns the appropriate instance of SparkAWSCredentials given the configured parameters.
builder() - Static method in class org.apache.spark.sql.SparkSession
Creates a SparkSession.Builder for constructing a SparkSession.
Builder() - Constructor for class org.apache.spark.sql.SparkSession.Builder

Builder() - Constructor for class org.apache.spark.streaming.kinesis.SparkAWSCredentials.Builder

buildErrorResponse(Response.Status, String) - Static method in class org.apache.spark.ui.UIUtils

buildForBatch() - Method in interface org.apache.spark.sql.connector.write.V1WriteBuilder

buildForBatch() - Method in interface org.apache.spark.sql.connector.write.WriteBuilder
Returns a BatchWrite to write data to a batch source.
buildForStreaming() - Method in interface org.apache.spark.sql.connector.write.V1WriteBuilder

buildForStreaming() - Method in interface org.apache.spark.sql.connector.write.WriteBuilder
Returns a StreamingWrite to write data to a streaming source.
buildForV1Write() - Method in interface org.apache.spark.sql.connector.write.V1WriteBuilder
Creates an InsertableRelation that allows appending a DataFrame to a destination (using data source-specific parameters).
buildPools() - Method in interface org.apache.spark.scheduler.SchedulableBuilder

buildReader(SparkSession, StructType, StructType, StructType, Seq<Filter>, Map<String, String>, Configuration) - Method in class org.apache.spark.sql.hive.orc.OrcFileFormat

buildScan(Seq<Attribute>, Seq<Expression>) - Method in interface org.apache.spark.sql.sources.CatalystScan

buildScan(String[], Filter[]) - Method in interface org.apache.spark.sql.sources.PrunedFilteredScan

buildScan(String[]) - Method in interface org.apache.spark.sql.sources.PrunedScan

buildScan() - Method in interface org.apache.spark.sql.sources.TableScan

buildTreeFromNodes(DecisionTreeModelReadWrite.NodeData[], String) - Static method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite
Given all data for all nodes in a tree, rebuild the tree.
builtinHiveVersion() - Static method in class org.apache.spark.sql.hive.HiveUtils
The version of Hive used internally by Spark SQL.
BYTE() - Static method in class org.apache.spark.api.r.SerializationFormats

BYTE() - Static method in class org.apache.spark.sql.Encoders
An encoder for nullable byte type.
BytecodeUtils - Class in org.apache.spark.graphx.util
Includes a utility function to test whether a function accesses a specific attribute of an object.
BytecodeUtils() - Constructor for class org.apache.spark.graphx.util.BytecodeUtils

ByteExactNumeric - Class in org.apache.spark.sql.types

ByteExactNumeric() - Constructor for class org.apache.spark.sql.types.ByteExactNumeric

byteFromString(String, ByteUnit) - Static method in class org.apache.spark.internal.config.ConfigHelpers

BYTES_READ() - Method in class org.apache.spark.InternalAccumulator.input$

BYTES_WRITTEN() - Method in class org.apache.spark.InternalAccumulator.output$

BYTES_WRITTEN() - Method in class org.apache.spark.InternalAccumulator.shuffleWrite$

bytesRead() - Method in class org.apache.spark.status.api.v1.InputMetricDistributions

bytesRead() - Method in class org.apache.spark.status.api.v1.InputMetrics

bytesToString(long) - Static method in class org.apache.spark.util.Utils
Convert a quantity in bytes to a human-readable string such as "4.0 MiB".
bytesToString(BigInt) - Static method in class org.apache.spark.util.Utils

byteStringAsBytes(String) - Static method in class org.apache.spark.util.Utils
Convert a passed byte string (e.g. 50b, 100k, or 250m) to bytes for internal use.
byteStringAsGb(String) - Static method in class org.apache.spark.util.Utils
Convert a passed byte string (e.g. 50b, 100k, 250m, or 500g) to gibibytes for internal use.
byteStringAsKb(String) - Static method in class org.apache.spark.util.Utils
Convert a passed byte string (e.g. 50b, 100k, or 250m) to kibibytes for internal use.
byteStringAsMb(String) - Static method in class org.apache.spark.util.Utils
Convert a passed byte string (e.g. 50b, 100k, or 250m) to mebibytes for internal use.
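Byte-string parsing of the kind Utils.byteStringAsBytes describes can be sketched as a suffix lookup with binary units (k = KiB, m = MiB, and so on). This is an illustrative re-implementation, not Spark's actual parser, which also accepts longer suffixes such as "kb" and "mb":

```python
import re

UNITS = {"b": 1, "k": 1 << 10, "m": 1 << 20, "g": 1 << 30, "t": 1 << 40}

def byte_string_as_bytes(s):
    # Accept forms like "50b", "100k", "250m", "1g"; bare numbers mean bytes.
    match = re.fullmatch(r"(\d+)([bkmgt]?)", s.strip().lower())
    if not match:
        raise ValueError(f"invalid byte string: {s}")
    value, unit = match.groups()
    return int(value) * UNITS[unit or "b"]
```

So `"100k"` parses to 102400 bytes and `"250m"` to 262144000 bytes.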
bytesWritten() - Method in class org.apache.spark.status.api.v1.OutputMetricDistributions

bytesWritten() - Method in class org.apache.spark.status.api.v1.OutputMetrics

bytesWritten() - Method in class org.apache.spark.status.api.v1.ShuffleWriteMetrics

bytesWritten(long) - Method in interface org.apache.spark.util.logging.RollingPolicy
Notifies that bytes have been written.
byteToString(long, ByteUnit) - Static method in class org.apache.spark.internal.config.ConfigHelpers

ByteType - Class in org.apache.spark.sql.types
The data type representing Byte values.
ByteType() - Constructor for class org.apache.spark.sql.types.ByteType

ByteType - Static variable in class org.apache.spark.sql.types.DataTypes
Gets the ByteType object.

C

cache() - Method in class org.apache.spark.api.java.JavaDoubleRDD
Persist this RDD with the default storage level (MEMORY_ONLY).
cache() - Method in class org.apache.spark.api.java.JavaPairRDD
Persist this RDD with the default storage level (MEMORY_ONLY).
cache() - Method in class org.apache.spark.api.java.JavaRDD
Persist this RDD with the default storage level (MEMORY_ONLY).
cache() - Method in class org.apache.spark.graphx.Graph
Caches the vertices and edges associated with this graph at the previously-specified target storage levels, which default to MEMORY_ONLY.
cache() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
Persists the edge partitions using targetStorageLevel, which defaults to MEMORY_ONLY.
cache() - Method in class org.apache.spark.graphx.impl.GraphImpl

cache() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
Persists the vertex partitions at targetStorageLevel, which defaults to MEMORY_ONLY.
cache() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
Caches the underlying RDD.
cache() - Method in class org.apache.spark.rdd.RDD
Persist this RDD with the default storage level (MEMORY_ONLY).
cache() - Method in class org.apache.spark.sql.Dataset
Persist this Dataset with the default storage level (MEMORY_AND_DISK).
cache() - Method in class org.apache.spark.streaming.api.java.JavaDStream
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER).
cache() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER).
cache() - Method in class org.apache.spark.streaming.dstream.DStream
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER).
CACHED_PARTITIONS() - Static method in class org.apache.spark.ui.storage.ToolTips

cacheNodeIds() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel

cacheNodeIds() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier

cacheNodeIds() - Method in class org.apache.spark.ml.classification.GBTClassificationModel

cacheNodeIds() - Method in class org.apache.spark.ml.classification.GBTClassifier

cacheNodeIds() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel

cacheNodeIds() - Method in class org.apache.spark.ml.classification.RandomForestClassifier

cacheNodeIds() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel

cacheNodeIds() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor

cacheNodeIds() - Method in class org.apache.spark.ml.regression.GBTRegressionModel

cacheNodeIds() - Method in class org.apache.spark.ml.regression.GBTRegressor

cacheNodeIds() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel

cacheNodeIds() - Method in class org.apache.spark.ml.regression.RandomForestRegressor

cacheNodeIds() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
If false, the algorithm will pass trees to executors to match instances with nodes.
cacheSize() - Method in interface org.apache.spark.SparkExecutorInfo

cacheSize() - Method in class org.apache.spark.SparkExecutorInfoImpl

cacheTable(String) - Method in class org.apache.spark.sql.catalog.Catalog
Caches the specified table in memory.
cacheTable(String, StorageLevel) - Method in class org.apache.spark.sql.catalog.Catalog
Caches the specified table with the given storage level.
cacheTable(String) - Method in class org.apache.spark.sql.SQLContext
Caches the specified table in memory.
calculate(DenseVector<Object>) - Method in class org.apache.spark.ml.regression.AFTCostFun

calculate(double[], double) - Static method in class org.apache.spark.mllib.tree.impurity.Entropy
:: DeveloperApi :: Information calculation for multiclass classification.
calculate(double, double, double) - Static method in class org.apache.spark.mllib.tree.impurity.Entropy
:: DeveloperApi :: Variance calculation.
calculate(double[], double) - Static method in class org.apache.spark.mllib.tree.impurity.Gini
:: DeveloperApi :: Information calculation for multiclass classification.
calculate(double, double, double) - Static method in class org.apache.spark.mllib.tree.impurity.Gini
:: DeveloperApi :: Variance calculation.
calculate(double[], double) - Method in interface org.apache.spark.mllib.tree.impurity.Impurity
:: DeveloperApi :: Information calculation for multiclass classification.
calculate(double, double, double) - Method in interface org.apache.spark.mllib.tree.impurity.Impurity
:: DeveloperApi :: Information calculation for regression.
calculate(double[], double) - Static method in class org.apache.spark.mllib.tree.impurity.Variance
:: DeveloperApi :: Information calculation for multiclass classification.
calculate(double, double, double) - Static method in class org.apache.spark.mllib.tree.impurity.Variance
:: DeveloperApi :: Variance calculation.
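The classification impurities named above (Entropy and Gini) both take per-class counts. A sketch of the two formulas under the usual definitions (illustrative; not Spark's internal code, which works from aggregated statistics):

```python
import math

def entropy(counts):
    # Shannon entropy in bits over the empirical class distribution.
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in ps)

def gini(counts):
    # Gini impurity: probability two independent draws disagree on class.
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)
```

A perfectly mixed two-class node has entropy 1.0 and Gini impurity 0.5; a pure node has 0 for both.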
calculateNumberOfPartitions(long, int, int) - Method in class org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter$
Calculate the number of partitions to use in saving the model.
CalendarIntervalType - Class in org.apache.spark.sql.types
The data type representing calendar time intervals.
CalendarIntervalType() - Constructor for class org.apache.spark.sql.types.CalendarIntervalType

CalendarIntervalType - Static variable in class org.apache.spark.sql.types.DataTypes
Gets the CalendarIntervalType object.
call(K, Iterator<V1>, Iterator<V2>) - Method in interface org.apache.spark.api.java.function.CoGroupFunction

call(T) - Method in interface org.apache.spark.api.java.function.DoubleFlatMapFunction

call(T) - Method in interface org.apache.spark.api.java.function.DoubleFunction

call(T) - Method in interface org.apache.spark.api.java.function.FilterFunction

call(T) - Method in interface org.apache.spark.api.java.function.FlatMapFunction

call(T1, T2) - Method in interface org.apache.spark.api.java.function.FlatMapFunction2

call(K, Iterator<V>) - Method in interface org.apache.spark.api.java.function.FlatMapGroupsFunction

call(K, Iterator<V>, GroupState<S>) - Method in interface org.apache.spark.api.java.function.FlatMapGroupsWithStateFunction

call(T) - Method in interface org.apache.spark.api.java.function.ForeachFunction

call(Iterator<T>) - Method in interface org.apache.spark.api.java.function.ForeachPartitionFunction

call(T1) - Method in interface org.apache.spark.api.java.function.Function

call() - Method in interface org.apache.spark.api.java.function.Function0

call(T1, T2) - Method in interface org.apache.spark.api.java.function.Function2

call(T1, T2, T3) - Method in interface org.apache.spark.api.java.function.Function3

call(T1, T2, T3, T4) - Method in interface org.apache.spark.api.java.function.Function4

call(T) - Method in interface org.apache.spark.api.java.function.MapFunction

call(K, Iterator<V>) - Method in interface org.apache.spark.api.java.function.MapGroupsFunction

call(K, Iterator<V>, GroupState<S>) - Method in interface org.apache.spark.api.java.function.MapGroupsWithStateFunction

call(Iterator<T>) - Method in interface org.apache.spark.api.java.function.MapPartitionsFunction

call(T) - Method in interface org.apache.spark.api.java.function.PairFlatMapFunction

call(T) - Method in interface org.apache.spark.api.java.function.PairFunction

call(T, T) - Method in interface org.apache.spark.api.java.function.ReduceFunction

call(T) - Method in interface org.apache.spark.api.java.function.VoidFunction

call(T1, T2) - Method in interface org.apache.spark.api.java.function.VoidFunction2

call() - Method in interface org.apache.spark.sql.api.java.UDF0

call(T1) - Method in interface org.apache.spark.sql.api.java.UDF1

call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10) - Method in interface org.apache.spark.sql.api.java.UDF10

call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11) - Method in interface org.apache.spark.sql.api.java.UDF11

call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12) - Method in interface org.apache.spark.sql.api.java.UDF12

call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13) - Method in interface org.apache.spark.sql.api.java.UDF13

call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14) - Method in interface org.apache.spark.sql.api.java.UDF14

call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15) - Method in interface org.apache.spark.sql.api.java.UDF15

call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16) - Method in interface org.apache.spark.sql.api.java.UDF16

call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17) - Method in interface org.apache.spark.sql.api.java.UDF17

call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18) - Method in interface org.apache.spark.sql.api.java.UDF18

call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19) - Method in interface org.apache.spark.sql.api.java.UDF19

call(T1, T2) - Method in interface org.apache.spark.sql.api.java.UDF2

call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20) - Method in interface org.apache.spark.sql.api.java.UDF20

call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20, T21) - Method in interface org.apache.spark.sql.api.java.UDF21

call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22) - Method in interface org.apache.spark.sql.api.java.UDF22

call(T1, T2, T3) - Method in interface org.apache.spark.sql.api.java.UDF3

call(T1, T2, T3, T4) - Method in interface org.apache.spark.sql.api.java.UDF4

call(T1, T2, T3, T4, T5) - Method in interface org.apache.spark.sql.api.java.UDF5

call(T1, T2, T3, T4, T5, T6) - Method in interface org.apache.spark.sql.api.java.UDF6

call(T1, T2, T3, T4, T5, T6, T7) - Method in interface org.apache.spark.sql.api.java.UDF7

call(T1, T2, T3, T4, T5, T6, T7, T8) - Method in interface org.apache.spark.sql.api.java.UDF8

call(T1, T2, T3, T4, T5, T6, T7, T8, T9) - Method in interface org.apache.spark.sql.api.java.UDF9

callSite() - Method in class org.apache.spark.storage.RDDInfo

callUDF(String, Column...) - Static method in class org.apache.spark.sql.functions
Calls a user-defined function.
callUDF(String, Seq<Column>) - Static method in class org.apache.spark.sql.functions
Calls a user-defined function.
cancel() - Method in class org.apache.spark.ComplexFutureAction

cancel() - Method in interface org.apache.spark.FutureAction
Cancels the execution of this action.
cancel() - Method in class org.apache.spark.SimpleFutureAction

cancelAllJobs() - Method in class org.apache.spark.api.java.JavaSparkContext
Cancel all jobs that have been scheduled or are running.
cancelAllJobs() - Method in class org.apache.spark.SparkContext
Cancel all jobs that have been scheduled or are running.
cancelJob(int, String) - Method in class org.apache.spark.SparkContext
Cancel a given job if it's scheduled or running.
cancelJob(int) - Method in class org.apache.spark.SparkContext
Cancel a given job if it's scheduled or running.
cancelJobGroup(String) - Method in class org.apache.spark.api.java.JavaSparkContext
Cancel active jobs for the specified group.
cancelJobGroup(String) - Method in class org.apache.spark.SparkContext
Cancel active jobs for the specified group.
cancelStage(int, String) - Method in class org.apache.spark.SparkContext
Cancel a given stage and all jobs associated with it.
cancelStage(int) - Method in class org.apache.spark.SparkContext
Cancel a given stage and all jobs associated with it.
cancelTasks(int, boolean) - Method in interface org.apache.spark.scheduler.TaskScheduler

canCreate(String) - Method in interface org.apache.spark.scheduler.ExternalClusterManager
Check if this cluster manager instance can create scheduler components for a certain master URL.
canDoMerge() - 类 中的方法org.apache.spark.sql.hive.HiveUDAFBuffer
 
canEqual(Object) - 类 中的静态方法org.apache.spark.ExpireDeadHosts
 
canEqual(Object) - 类 中的静态方法org.apache.spark.metrics.DirectPoolMemory
 
canEqual(Object) - 类 中的静态方法org.apache.spark.metrics.GarbageCollectionMetrics
 
canEqual(Object) - 类 中的静态方法org.apache.spark.metrics.JVMHeapMemory
 
canEqual(Object) - 类 中的静态方法org.apache.spark.metrics.JVMOffHeapMemory
 
canEqual(Object) - 类 中的静态方法org.apache.spark.metrics.MappedPoolMemory
 
canEqual(Object) - Static method in class org.apache.spark.metrics.OffHeapExecutionMemory
 
canEqual(Object) - Static method in class org.apache.spark.metrics.OffHeapStorageMemory
 
canEqual(Object) - Static method in class org.apache.spark.metrics.OffHeapUnifiedMemory
 
canEqual(Object) - Static method in class org.apache.spark.metrics.OnHeapExecutionMemory
 
canEqual(Object) - Static method in class org.apache.spark.metrics.OnHeapStorageMemory
 
canEqual(Object) - Static method in class org.apache.spark.metrics.OnHeapUnifiedMemory
 
canEqual(Object) - Static method in class org.apache.spark.metrics.ProcessTreeMetrics
 
canEqual(Object) - Static method in class org.apache.spark.ml.feature.Dot
 
canEqual(Object) - Static method in class org.apache.spark.ml.feature.EmptyTerm
 
canEqual(Object) - Static method in class org.apache.spark.Resubmitted
 
canEqual(Object) - Static method in class org.apache.spark.rpc.netty.OnStart
 
canEqual(Object) - Static method in class org.apache.spark.rpc.netty.OnStop
 
canEqual(Object) - Static method in class org.apache.spark.scheduler.AllJobsCancelled
 
canEqual(Object) - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
 
canEqual(Object) - Static method in class org.apache.spark.scheduler.JobSucceeded
 
canEqual(Object) - Static method in class org.apache.spark.scheduler.ResubmitFailedStages
 
canEqual(Object) - Static method in class org.apache.spark.scheduler.StopCoordinator
 
canEqual(Object) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
 
canEqual(Object) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
 
canEqual(Object) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
 
canEqual(Object) - Static method in class org.apache.spark.sql.sources.AlwaysFalse
 
canEqual(Object) - Static method in class org.apache.spark.sql.sources.AlwaysTrue
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.BinaryType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.BooleanType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.ByteType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.CalendarIntervalType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.DateType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.DoubleType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.FloatType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.IntegerType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.LongType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.NullType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.ShortType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.StringType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.TimestampType
 
canEqual(Object) - Static method in class org.apache.spark.StopMapOutputTracker
 
canEqual(Object) - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials
 
canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds
 
canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo
 
canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers
 
canEqual(Object) - Static method in class org.apache.spark.Success
 
canEqual(Object) - Static method in class org.apache.spark.TaskResultLost
 
canEqual(Object) - Static method in class org.apache.spark.TaskSchedulerIsSet
 
canEqual(Object) - Static method in class org.apache.spark.UnknownReason
 
canEqual(Object) - Method in class org.apache.spark.util.MutablePair
 
canHandle(String) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect
 
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
 
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
 
canHandle(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
Check if this dialect instance can handle a certain JDBC URL.
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
 
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
 
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
 
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
 
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
 
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
 
CanonicalRandomVertexCut$() - Constructor for class org.apache.spark.graphx.PartitionStrategy.CanonicalRandomVertexCut$
 
canWrite(DataType, DataType, boolean, Function2<String, String, Object>, String, Enumeration.Value, Function1<String, BoxedUnit>) - Static method in class org.apache.spark.sql.types.DataType
Returns true if the write data type can be read using the read data type.
capabilities() - Method in interface org.apache.spark.sql.connector.catalog.Table
Returns the set of capabilities for this table.
cartesian(JavaRDDLike<U, ?>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of elements (a, b) where a is in this and b is in other.
cartesian(RDD<U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of elements (a, b) where a is in this and b is in other.
CaseInsensitiveStringMap - Class in org.apache.spark.sql.util
Case-insensitive map of string keys to string values.
CaseInsensitiveStringMap(Map<String, String>) - Constructor for class org.apache.spark.sql.util.CaseInsensitiveStringMap
 
caseSensitive() - Method in class org.apache.spark.ml.feature.StopWordsRemover
Whether to do a case sensitive comparison over the stop words.
cast(DataType) - Method in class org.apache.spark.sql.Column
Casts the column to a different data type.
cast(String) - Method in class org.apache.spark.sql.Column
Casts the column to a different data type, using the canonical string representation of the type.
Catalog - Class in org.apache.spark.sql.catalog
Catalog interface for Spark.
Catalog() - Constructor for class org.apache.spark.sql.catalog.Catalog
 
catalog() - Method in class org.apache.spark.sql.SparkSession
 
CatalogAndIdentifier() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
 
CatalogAndIdentifierParts() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
 
CatalogAndIdentifierParts() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndIdentifierParts
 
CatalogAndIdentifierParts$() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndIdentifierParts$
 
CatalogAndNamespace() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
 
CatalogAndNamespace() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndNamespace
 
CatalogAndNamespace$() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndNamespace$
 
CatalogExtension - Interface in org.apache.spark.sql.connector.catalog
An API to extend the Spark built-in session catalog.
CatalogHelper(CatalogPlugin) - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper
 
catalogManager() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
 
CatalogNotFoundException - Exception in org.apache.spark.sql.connector.catalog
 
CatalogNotFoundException(String, Throwable) - Constructor for exception org.apache.spark.sql.connector.catalog.CatalogNotFoundException
 
CatalogNotFoundException(String) - Constructor for exception org.apache.spark.sql.connector.catalog.CatalogNotFoundException
 
CatalogObjectIdentifier() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
 
CatalogObjectIdentifier() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogObjectIdentifier
 
CatalogObjectIdentifier$() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogObjectIdentifier$
 
CatalogPlugin - Interface in org.apache.spark.sql.connector.catalog
A marker interface to provide a catalog implementation for Spark.
Catalogs - Class in org.apache.spark.sql.connector.catalog
 
catalogString() - Method in class org.apache.spark.sql.types.ArrayType
 
catalogString() - Static method in class org.apache.spark.sql.types.BinaryType
 
catalogString() - Static method in class org.apache.spark.sql.types.BooleanType
 
catalogString() - Static method in class org.apache.spark.sql.types.ByteType
 
catalogString() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
 
catalogString() - Method in class org.apache.spark.sql.types.DataType
String representation for the type saved in external catalogs.
catalogString() - Static method in class org.apache.spark.sql.types.DateType
 
catalogString() - Static method in class org.apache.spark.sql.types.DoubleType
 
catalogString() - Static method in class org.apache.spark.sql.types.FloatType
 
catalogString() - Static method in class org.apache.spark.sql.types.IntegerType
 
catalogString() - Static method in class org.apache.spark.sql.types.LongType
 
catalogString() - Method in class org.apache.spark.sql.types.MapType
 
catalogString() - Static method in class org.apache.spark.sql.types.NullType
 
catalogString() - Static method in class org.apache.spark.sql.types.ShortType
 
catalogString() - Static method in class org.apache.spark.sql.types.StringType
 
catalogString() - Method in class org.apache.spark.sql.types.StructType
 
catalogString() - Static method in class org.apache.spark.sql.types.TimestampType
 
CatalogV2Implicits - Class in org.apache.spark.sql.connector.catalog
Conversion helpers for working with v2 CatalogPlugin.
CatalogV2Implicits() - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits
 
CatalogV2Implicits.BucketSpecHelper - Class in org.apache.spark.sql.connector.catalog
 
CatalogV2Implicits.CatalogHelper - Class in org.apache.spark.sql.connector.catalog
 
CatalogV2Implicits.IdentifierHelper - Class in org.apache.spark.sql.connector.catalog
 
CatalogV2Implicits.MultipartIdentifierHelper - Class in org.apache.spark.sql.connector.catalog
 
CatalogV2Implicits.NamespaceHelper - Class in org.apache.spark.sql.connector.catalog
 
CatalogV2Implicits.PartitionTypeHelper - Class in org.apache.spark.sql.connector.catalog
 
CatalogV2Implicits.TransformHelper - Class in org.apache.spark.sql.connector.catalog
 
CatalogV2Util - Class in org.apache.spark.sql.connector.catalog
 
CatalogV2Util() - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Util
 
CatalystScan - Interface in org.apache.spark.sql.sources
::Experimental:: An interface for experimenting with a more direct connection to the query planner.
Categorical() - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
 
categoricalCols() - Method in class org.apache.spark.ml.feature.FeatureHasher
Numeric columns to treat as categorical features.
categoricalFeaturesInfo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
CategoricalSplit - Class in org.apache.spark.ml.tree
Split which tests a categorical feature.
categories() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
 
categories() - Method in class org.apache.spark.mllib.tree.model.Split
 
categoryMaps() - Method in class org.apache.spark.ml.feature.VectorIndexerModel
 
categorySizes() - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
 
cause() - Method in exception org.apache.spark.sql.AnalysisException
 
cause() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException
 
CausedBy - Class in org.apache.spark.util
Extractor Object for pulling out the root cause of an error.
CausedBy() - Constructor for class org.apache.spark.util.CausedBy
 
cbrt(Column) - Static method in class org.apache.spark.sql.functions
Computes the cube-root of the given value.
cbrt(String) - Static method in class org.apache.spark.sql.functions
Computes the cube-root of the given column.
ceil(Column) - Static method in class org.apache.spark.sql.functions
Computes the ceiling of the given value.
ceil(String) - Static method in class org.apache.spark.sql.functions
Computes the ceiling of the given column.
ceil() - Method in class org.apache.spark.sql.types.Decimal
 
censorCol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
 
censorCol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
 
censorCol() - Method in interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams
Param for censor column name.
chainl1(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<Function2<T, T, T>>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
 
chainl1(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<U>>, Function0<Parsers.Parser<Function2<T, U, T>>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
 
chainr1(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<Function2<T, U, U>>>, Function2<T, U, U>, U) - Static method in class org.apache.spark.ml.feature.RFormulaParser
 
changePrecision(int, int) - Method in class org.apache.spark.sql.types.Decimal
Update precision and scale while keeping our value the same, and return true if successful.
channel() - Method in interface org.apache.spark.shuffle.api.WritableByteChannelWrapper
The underlying channel to write bytes into.
channelRead0(ChannelHandlerContext, byte[]) - Method in class org.apache.spark.api.r.RBackendAuthHandler
 
CharType - Class in org.apache.spark.sql.types
Hive char type.
CharType(int) - Constructor for class org.apache.spark.sql.types.CharType
 
checkAndGetK8sMasterUrl(String) - Static method in class org.apache.spark.util.Utils
Check the validity of the given Kubernetes master URL and return the resolved URL.
checkColumnNameDuplication(Seq<String>, String, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.util.SchemaUtils
Checks if input column names have duplicate identifiers.
checkColumnNameDuplication(Seq<String>, String, boolean) - Static method in class org.apache.spark.sql.util.SchemaUtils
Checks if input column names have duplicate identifiers.
checkColumnType(StructType, String, DataType, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
Check whether the given schema contains a column of the required data type.
checkColumnTypes(StructType, String, Seq<DataType>, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
Check whether the given schema contains a column of one of the required data types.
checkDataColumns(RFormula, Dataset<?>) - Static method in class org.apache.spark.ml.r.RWrapperUtils
DataFrame column check.
checkedCast() - Method in interface org.apache.spark.ml.recommendation.ALSModelParams
Attempts to safely cast a user/item id to an Int.
checkFileExists(String, Configuration) - Static method in class org.apache.spark.streaming.util.HdfsUtils
Check if the file exists at the given path.
checkHost(String) - Static method in class org.apache.spark.util.Utils
 
checkHostPort(String) - Static method in class org.apache.spark.util.Utils
 
checkNumericType(StructType, String, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
Check whether the given schema contains a column of the numeric data type.
checkpoint() - Method in interface org.apache.spark.api.java.JavaRDDLike
Mark this RDD for checkpointing.
checkpoint() - Method in class org.apache.spark.graphx.Graph
Mark this Graph for checkpointing.
checkpoint() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
 
checkpoint() - Method in class org.apache.spark.graphx.impl.GraphImpl
 
checkpoint() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
 
checkpoint() - Method in class org.apache.spark.rdd.HadoopRDD
 
checkpoint() - Method in class org.apache.spark.rdd.RDD
Mark this RDD for checkpointing.
checkpoint() - Method in class org.apache.spark.sql.Dataset
Eagerly checkpoint a Dataset and return the new Dataset.
checkpoint(boolean) - Method in class org.apache.spark.sql.Dataset
Returns a checkpointed version of this Dataset.
checkpoint(Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Enable periodic checkpointing of RDDs of this DStream.
checkpoint(String) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Sets the context to periodically checkpoint the DStream operations for master fault-tolerance.
checkpoint(Duration) - Method in class org.apache.spark.streaming.dstream.DStream
Enable periodic checkpointing of RDDs of this DStream.
checkpoint(String) - Method in class org.apache.spark.streaming.StreamingContext
Set the context to periodically checkpoint the DStream operations for driver fault-tolerance.
checkpointCleaned(long) - Method in interface org.apache.spark.CleanerListener
 
Checkpointed() - Static method in class org.apache.spark.rdd.CheckpointState
 
CheckpointingInProgress() - Static method in class org.apache.spark.rdd.CheckpointState
 
checkpointInterval() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
 
checkpointInterval() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
 
checkpointInterval() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
 
checkpointInterval() - Method in class org.apache.spark.ml.classification.GBTClassifier
 
checkpointInterval() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
 
checkpointInterval() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
 
checkpointInterval() - Method in class org.apache.spark.ml.clustering.LDA
 
checkpointInterval() - Method in class org.apache.spark.ml.clustering.LDAModel
 
checkpointInterval() - Method in interface org.apache.spark.ml.param.shared.HasCheckpointInterval
Param for the checkpoint interval (>= 1), or -1 to disable checkpointing.
checkpointInterval() - Method in class org.apache.spark.ml.recommendation.ALS
 
checkpointInterval() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
 
checkpointInterval() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
 
checkpointInterval() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
 
checkpointInterval() - Method in class org.apache.spark.ml.regression.GBTRegressor
 
checkpointInterval() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
 
checkpointInterval() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
 
checkpointInterval() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
CheckpointReader - Class in org.apache.spark.streaming
 
CheckpointReader() - Constructor for class org.apache.spark.streaming.CheckpointReader
 
CheckpointState - Class in org.apache.spark.rdd
Enumeration to manage state transitions of an RDD through checkpointing [ Initialized --> checkpointing in progress --> checkpointed ]
CheckpointState() - Constructor for class org.apache.spark.rdd.CheckpointState
 
checkSchemaColumnNameDuplication(StructType, String, boolean) - Static method in class org.apache.spark.sql.util.SchemaUtils
Checks if an input schema has duplicate column names.
checkSingleVsMultiColumnParams(Params, Seq<Param<?>>, Seq<Param<?>>) - Static method in class org.apache.spark.ml.param.ParamValidators
Utility for Param validity checks for Transformers which have both single- and multi-column support.
checkSpeculatableTasks(int) - Method in interface org.apache.spark.scheduler.Schedulable
 
checkState(boolean, Function0<String>) - Static method in class org.apache.spark.streaming.util.HdfsUtils
 
checkThresholdConsistency() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
If threshold and thresholds are both set, ensures they are consistent.
checkTransformDuplication(Seq<Transform>, String, boolean) - Static method in class org.apache.spark.sql.util.SchemaUtils
Checks if the partitioning transforms are being duplicated or not.
child() - Method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
 
child() - Method in class org.apache.spark.sql.sources.Not
 
CHILD_CONNECTION_TIMEOUT - Static variable in class org.apache.spark.launcher.SparkLauncher
Maximum time (in ms) to wait for a child process to connect back to the launcher server when using start().
CHILD_PROCESS_LOGGER_NAME - Static variable in class org.apache.spark.launcher.SparkLauncher
Logger name to use when launching a child process.
ChildFirstURLClassLoader - Class in org.apache.spark.util
A mutable class loader that gives preference to its own URLs over the parent class loader when loading classes and resources.
ChildFirstURLClassLoader(URL[], ClassLoader) - Constructor for class org.apache.spark.util.ChildFirstURLClassLoader
 
chiSqFunc() - Method in class org.apache.spark.mllib.stat.test.ChiSqTest.Method
 
ChiSqSelector - Class in org.apache.spark.ml.feature
Chi-Squared feature selection, which selects categorical features to use for predicting a categorical label.
ChiSqSelector(String) - Constructor for class org.apache.spark.ml.feature.ChiSqSelector
 
ChiSqSelector() - Constructor for class org.apache.spark.ml.feature.ChiSqSelector
 
ChiSqSelector - Class in org.apache.spark.mllib.feature
Creates a ChiSquared feature selector.
ChiSqSelector() - Constructor for class org.apache.spark.mllib.feature.ChiSqSelector
 
ChiSqSelector(int) - Constructor for class org.apache.spark.mllib.feature.ChiSqSelector
This is the same as calling this() and setNumTopFeatures(numTopFeatures).
ChiSqSelectorModel - Class in org.apache.spark.ml.feature
Model fitted by ChiSqSelector.
ChiSqSelectorModel - Class in org.apache.spark.mllib.feature
Chi Squared selector model.
ChiSqSelectorModel(int[]) - Constructor for class org.apache.spark.mllib.feature.ChiSqSelectorModel
 
ChiSqSelectorModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.feature
 
ChiSqSelectorModel.SaveLoadV1_0$.Data - Class in org.apache.spark.mllib.feature
Model data for import/export.
ChiSqSelectorModel.SaveLoadV1_0$.Data$ - Class in org.apache.spark.mllib.feature
 
ChiSqSelectorParams - Interface in org.apache.spark.ml.feature
chiSqTest(Vector, Vector) - Static method in class org.apache.spark.mllib.stat.Statistics
Conduct Pearson's chi-squared goodness of fit test of the observed data against the expected distribution.
chiSqTest(Vector) - Static method in class org.apache.spark.mllib.stat.Statistics
Conduct Pearson's chi-squared goodness of fit test of the observed data against the uniform distribution, with each category having an expected frequency of 1 / observed.size.
chiSqTest(Matrix) - Static method in class org.apache.spark.mllib.stat.Statistics
Conduct Pearson's independence test on the input contingency matrix, which cannot contain negative entries or columns or rows that sum up to 0.
chiSqTest(RDD<LabeledPoint>) - Static method in class org.apache.spark.mllib.stat.Statistics
Conduct Pearson's independence test for every feature against the label across the input RDD.
chiSqTest(JavaRDD<LabeledPoint>) - Static method in class org.apache.spark.mllib.stat.Statistics
Java-friendly version of chiSqTest().
ChiSqTest - Class in org.apache.spark.mllib.stat.test
Conduct the chi-squared test for the input RDDs using the specified method.
ChiSqTest() - Constructor for class org.apache.spark.mllib.stat.test.ChiSqTest
 
ChiSqTest.Method - Class in org.apache.spark.mllib.stat.test
param: name String name for the method.
ChiSqTest.Method$ - Class in org.apache.spark.mllib.stat.test
 
ChiSqTest.NullHypothesis$ - Class in org.apache.spark.mllib.stat.test
 
ChiSqTestResult - Class in org.apache.spark.mllib.stat.test
Object containing the test results for the chi-squared hypothesis test.
chiSquared(Vector, Vector, String) - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
 
chiSquaredFeatures(RDD<LabeledPoint>, String) - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
Conduct Pearson's independence test for each feature against the label across the input RDD.
chiSquaredMatrix(Matrix, String) - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
 
ChiSquareTest - Class in org.apache.spark.ml.stat
Chi-square hypothesis testing for categorical data.
ChiSquareTest() - Constructor for class org.apache.spark.ml.stat.ChiSquareTest
 
chmod700(File) - Static method in class org.apache.spark.util.Utils
JDK equivalent of chmod 700 file.
CholeskyDecomposition - Class in org.apache.spark.mllib.linalg
Compute Cholesky decomposition.
CholeskyDecomposition() - Constructor for class org.apache.spark.mllib.linalg.CholeskyDecomposition
 
cipherStream() - Method in interface org.apache.spark.security.CryptoStreamUtils.BaseErrorHandler
The encrypted stream that may get into an unhealthy state.
classForName(String, boolean, boolean) - Static method in class org.apache.spark.util.Utils
Preferred alternative to Class.forName(className), as well as Class.forName(className, initialize, loader) with current thread's ContextClassLoader.
Classification() - Static method in class org.apache.spark.mllib.tree.configuration.Algo
 
ClassificationLoss - Interface in org.apache.spark.mllib.tree.loss
 
ClassificationModel<FeaturesType,M extends ClassificationModel<FeaturesType,M>> - Class in org.apache.spark.ml.classification
:: DeveloperApi :: Model produced by a Classifier.
ClassificationModel() - Constructor for class org.apache.spark.ml.classification.ClassificationModel
 
ClassificationModel - Interface in org.apache.spark.mllib.classification
Represents a classification model that predicts to which of a set of categories an example belongs.
Classifier<FeaturesType,E extends Classifier<FeaturesType,E,M>,M extends ClassificationModel<FeaturesType,M>> - Class in org.apache.spark.ml.classification
:: DeveloperApi :: Single-label binary or multiclass classification.
Classifier() - Constructor for class org.apache.spark.ml.classification.Classifier
 
classifier() - Method in class org.apache.spark.ml.classification.OneVsRest
 
classifier() - Method in class org.apache.spark.ml.classification.OneVsRestModel
 
classifier() - Method in interface org.apache.spark.ml.classification.OneVsRestParams
Param for the base binary classifier that we reduce multiclass classification into.
ClassifierParams - Interface in org.apache.spark.ml.classification
(private[spark]) Params for classification.
ClassifierTypeTrait - Interface in org.apache.spark.ml.classification
 
classIsLoadable(String) - Static method in class org.apache.spark.util.Utils
Determines whether the provided class is loadable in the current thread.
className() - Method in class org.apache.spark.ExceptionFailure
 
className() - Static method in class org.apache.spark.ml.linalg.JsonMatrixConverter
Unique class name for identifying JSON object encoded by this class.
className() - Method in class org.apache.spark.sql.catalog.Function
 
classpathEntries() - Method in class org.apache.spark.status.api.v1.ApplicationEnvironmentInfo
 
classTag() - Method in class org.apache.spark.api.java.JavaDoubleRDD
 
classTag() - Method in class org.apache.spark.api.java.JavaPairRDD
 
classTag() - Method in class org.apache.spark.api.java.JavaRDD
 
classTag() - Method in interface org.apache.spark.api.java.JavaRDDLike
 
classTag() - Method in class org.apache.spark.sql.Dataset
 
classTag() - Method in class org.apache.spark.storage.memory.DeserializedMemoryEntry
 
classTag() - Method in interface org.apache.spark.storage.memory.MemoryEntry
 
classTag() - Method in class org.apache.spark.storage.memory.SerializedMemoryEntry
 
classTag() - Method in class org.apache.spark.streaming.api.java.JavaDStream
 
classTag() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
 
classTag() - Method in class org.apache.spark.streaming.api.java.JavaInputDStream
 
classTag() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
 
classTag() - Method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
 
clean(long, boolean) - Method in class org.apache.spark.streaming.util.WriteAheadLog
Clean all the records that are older than the threshold time.
clean(Object, boolean, boolean) - Static method in class org.apache.spark.util.ClosureCleaner
Clean the given closure in place.
CleanAccum - Class in org.apache.spark
 
CleanAccum(long) - Constructor for class org.apache.spark.CleanAccum
 
CleanBroadcast - Class in org.apache.spark
 
CleanBroadcast(long) - Constructor for class org.apache.spark.CleanBroadcast
 
CleanCheckpoint - Class in org.apache.spark
 
CleanCheckpoint(int) - Constructor for class org.apache.spark.CleanCheckpoint
 
CLEANER_ENABLED() - Static method in class org.apache.spark.internal.config.History
 
CLEANER_INTERVAL_S() - Static method in class org.apache.spark.internal.config.History
 
CleanerListener - Interface in org.apache.spark
Listener class used for testing when any item has been cleaned by the Cleaner class.
cleaning() - Method in class org.apache.spark.status.LiveStage
 
CleanRDD - Class in org.apache.spark
 
CleanRDD(int) - Constructor for class org.apache.spark.CleanRDD
 
CleanShuffle - Class in org.apache.spark
 
CleanShuffle(int) - Constructor for class org.apache.spark.CleanShuffle
 
cleanupApplication() - Method in interface org.apache.spark.shuffle.api.ShuffleDriverComponents
Called once at the end of the Spark application to clean up any existing shuffle state.
CleanupDynamicPruningFilters - Class in org.apache.spark.sql.dynamicpruning
Removes the filter nodes with dynamic pruning that were not pushed down to the scan.
CleanupDynamicPruningFilters() - Constructor for class org.apache.spark.sql.dynamicpruning.CleanupDynamicPruningFilters
 
cleanupOldBlocks(long) - Method in interface org.apache.spark.streaming.receiver.ReceivedBlockHandler
Clean up old blocks older than the given threshold time.
CleanupTask - Interface in org.apache.spark
Classes that represent cleaning tasks.
CleanupTaskWeakReference - Class in org.apache.spark
A WeakReference associated with a CleanupTask.
CleanupTaskWeakReference(CleanupTask, Object, ReferenceQueue<Object>) - Constructor for class org.apache.spark.CleanupTaskWeakReference
 
clear(Param<?>) - Method in interface org.apache.spark.ml.param.Params
Clears the user-supplied value for the input param.
clear() - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
 
clear() - Method in class org.apache.spark.sql.util.ExecutionListenerManager
Removes all the registered QueryExecutionListener.
clear() - Static method in class org.apache.spark.util.AccumulatorContext
Clears all registered AccumulatorV2s.
clearActiveSession() - Static method in class org.apache.spark.sql.SparkSession
Clears the active SparkSession for current thread.
clearCache() - Method in class org.apache.spark.sql.catalog.Catalog
Removes all cached tables from the in-memory cache.
clearCache() - Method in class org.apache.spark.sql.SQLContext
Removes all cached tables from the in-memory cache.
clearCallSite() - Method in class org.apache.spark.api.java.JavaSparkContext
Pass-through to SparkContext.setCallSite.
clearCallSite() - Method in class org.apache.spark.SparkContext
Clear the thread-local property for overriding the call sites of actions and RDDs.
clearDefaultSession() - 类 中的静态方法org.apache.spark.sql.SparkSession
Clears the default SparkSession that is returned by the builder.
clearDependencies() - 类 中的方法org.apache.spark.rdd.CoGroupedRDD
 
clearDependencies() - 类 中的方法org.apache.spark.rdd.ShuffledRDD
 
clearDependencies() - 类 中的方法org.apache.spark.rdd.UnionRDD
 
clearJobGroup() - 类 中的方法org.apache.spark.api.java.JavaSparkContext
Clear the current thread's job group ID and its description.
clearJobGroup() - 类 中的方法org.apache.spark.SparkContext
Clear the current thread's job group ID and its description.
clearThreshold() - 类 中的方法org.apache.spark.mllib.classification.LogisticRegressionModel
Clears the threshold so that predict will output raw prediction scores.
clearThreshold() - 类 中的方法org.apache.spark.mllib.classification.SVMModel
Clears the threshold so that predict will output raw prediction scores.
Clock - org.apache.spark.util中的接口
An interface to represent clocks, so that they can be mocked out in unit tests.
CLogLog$() - 类 的构造器org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog$
 
clone() - 类 中的方法org.apache.spark.SparkConf
Copy this object
clone() - 类 中的方法org.apache.spark.sql.ExperimentalMethods
 
clone() - 类 中的方法org.apache.spark.sql.types.Decimal
 
clone() - 类 中的方法org.apache.spark.storage.StorageLevel
 
clone() - 类 中的方法org.apache.spark.util.random.BernoulliCellSampler
 
clone() - 类 中的方法org.apache.spark.util.random.BernoulliSampler
 
clone() - 类 中的方法org.apache.spark.util.random.PoissonSampler
 
clone() - 接口 中的方法org.apache.spark.util.random.RandomSampler
return a copy of the RandomSampler object
clone(T, SerializerInstance, ClassTag<T>) - 类 中的静态方法org.apache.spark.util.Utils
Clone an object using a Spark serializer.
cloneComplement() - 类 中的方法org.apache.spark.util.random.BernoulliCellSampler
Return a sampler that is the complement of the range specified of the current sampler.
cloneProperties(Properties) - 类 中的静态方法org.apache.spark.util.Utils
Create a new properties object with the same values as `props`
close() - Method in class org.apache.spark.api.java.JavaSparkContext

close() - Method in class org.apache.spark.io.NioBufferedFileInputStream

close() - Method in class org.apache.spark.io.ReadAheadInputStream

close() - Method in interface org.apache.spark.security.CryptoStreamUtils.BaseErrorHandler

close() - Method in class org.apache.spark.serializer.DeserializationStream

close() - Method in class org.apache.spark.serializer.SerializationStream

close(Throwable) - Method in class org.apache.spark.sql.ForeachWriter
Called when stopping to process one partition of new data on the executor side.
close() - Method in class org.apache.spark.sql.hive.execution.HiveOutputWriter

close() - Method in class org.apache.spark.sql.SparkSession
Synonym for stop().
close() - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector

close() - Method in class org.apache.spark.sql.vectorized.ColumnarBatch
Called to close all the columns in this batch.
close() - Method in class org.apache.spark.sql.vectorized.ColumnVector
Cleans up memory for this column vector.
close() - Method in class org.apache.spark.storage.BufferReleasingInputStream

close() - Method in class org.apache.spark.storage.CountingWritableChannel

close() - Method in class org.apache.spark.storage.TimeTrackingOutputStream

close() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext

close() - Method in class org.apache.spark.streaming.util.WriteAheadLog
Close this log and release any resources.
closeWriter(TaskAttemptContext) - Method in class org.apache.spark.internal.io.HadoopWriteConfigUtil

ClosureCleaner - Class in org.apache.spark.util
A cleaner that renders closures serializable if they can be done so safely.
ClosureCleaner() - Constructor for class org.apache.spark.util.ClosureCleaner

closureSerializer() - Method in class org.apache.spark.SparkEnv

cls() - Method in class org.apache.spark.sql.types.ObjectType

cls() - Method in class org.apache.spark.util.MethodIdentifier

clsTag() - Method in interface org.apache.spark.sql.Encoder
A ClassTag that can be used to construct an Array to contain a collection of T.
cluster() - Method in class org.apache.spark.ml.clustering.ClusteringSummary

cluster() - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment

clusterCenter() - Method in class org.apache.spark.ml.clustering.ClusterData

clusterCenters() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel

clusterCenters() - Method in class org.apache.spark.ml.clustering.KMeansModel

clusterCenters() - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
Leaf cluster centers.
clusterCenters() - Method in class org.apache.spark.mllib.clustering.KMeansModel

clusterCenters() - Method in class org.apache.spark.mllib.clustering.StreamingKMeansModel

ClusterData - Class in org.apache.spark.ml.clustering
Helper class for storing model data.
ClusterData(int, Vector) - Constructor for class org.apache.spark.ml.clustering.ClusterData

clusteredColumns - Variable in class org.apache.spark.sql.connector.read.partitioning.ClusteredDistribution
The names of the clustered columns.
ClusteredDistribution - Class in org.apache.spark.sql.connector.read.partitioning
A concrete implementation of Distribution.
ClusteredDistribution(String[]) - Constructor for class org.apache.spark.sql.connector.read.partitioning.ClusteredDistribution

clusterIdx() - Method in class org.apache.spark.ml.clustering.ClusterData

ClusteringEvaluator - Class in org.apache.spark.ml.evaluation
Evaluator for clustering results.
ClusteringEvaluator(String) - Constructor for class org.apache.spark.ml.evaluation.ClusteringEvaluator

ClusteringEvaluator() - Constructor for class org.apache.spark.ml.evaluation.ClusteringEvaluator

ClusteringSummary - Class in org.apache.spark.ml.clustering
Summary of clustering algorithms.
CLUSTERS_CONFIG_PREFIX() - Static method in class org.apache.spark.kafka010.KafkaTokenSparkConf

clusterSizes() - Method in class org.apache.spark.ml.clustering.ClusteringSummary

ClusterStats(Vector, double, long) - Constructor for class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats

ClusterStats$() - Constructor for class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats$

clusterWeights() - Method in class org.apache.spark.mllib.clustering.StreamingKMeansModel

cn() - Method in class org.apache.spark.mllib.feature.VocabWord
coalesce(int) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int, boolean) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int) - Method in class org.apache.spark.api.java.JavaPairRDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int, boolean) - Method in class org.apache.spark.api.java.JavaPairRDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int) - Method in class org.apache.spark.api.java.JavaRDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int, boolean) - Method in class org.apache.spark.api.java.JavaRDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int, RDD<?>) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
Runs the packing algorithm and returns an array of PartitionGroups that, if possible, are load balanced and grouped by locality.
coalesce(int, RDD<?>) - Method in interface org.apache.spark.rdd.PartitionCoalescer
Coalesce the partitions of the given RDD.
coalesce(int, boolean, Option<PartitionCoalescer>, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset that has exactly numPartitions partitions, when fewer partitions are requested.
coalesce(Column...) - Static method in class org.apache.spark.sql.functions
Returns the first column that is not null, or null if all inputs are null.
coalesce(Seq<Column>) - Static method in class org.apache.spark.sql.functions
Returns the first column that is not null, or null if all inputs are null.
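Note that the coalesce entries above cover two unrelated APIs: partition reduction on RDDs/Datasets, and the SQL null-coalescing function. A minimal Scala sketch of both, assuming a running SparkSession bound to a value named `spark`:

```scala
import org.apache.spark.sql.functions.{coalesce, col, lit}
import spark.implicits._

// Partition coalesce: shrink to fewer partitions via a narrow dependency (no shuffle).
val ds = spark.range(0, 100, 1, 8)   // 8 initial partitions
val narrowed = ds.coalesce(2)
println(narrowed.rdd.getNumPartitions)  // 2

// SQL coalesce: the first non-null input wins, per row.
val df = Seq((Some(1), None: Option[Int]), (None: Option[Int], Some(2))).toDF("a", "b")
df.select(coalesce(col("a"), col("b"), lit(-1)).as("first_non_null")).show()
```

Because Dataset.coalesce does not shuffle, it cannot increase the partition count; use repartition for that.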
CoarseGrainedClusterMessage - Interface in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages

CoarseGrainedClusterMessages.AddWebUIFilter - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.AddWebUIFilter$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.GetExecutorLossReason - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.GetExecutorLossReason$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.KillExecutors - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.KillExecutors$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.KillExecutorsOnHost - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.KillExecutorsOnHost$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.KillTask - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.KillTask$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.LaunchTask - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.LaunchTask$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.RegisterClusterManager - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.RegisterClusterManager$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.RegisteredExecutor$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.RegisterExecutor - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.RegisterExecutor$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.RegisterExecutorFailed - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.RegisterExecutorFailed$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.RegisterExecutorResponse - Interface in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.RemoveExecutor - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.RemoveExecutor$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.RemoveWorker - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.RemoveWorker$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.RequestExecutors - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.RequestExecutors$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.RetrieveDelegationTokens$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.RetrieveLastAllocatedExecutorId$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.RetrieveSparkAppConfig$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.ReviveOffers$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.SetupDriver - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.SetupDriver$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.Shutdown$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.SparkAppConfig - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.SparkAppConfig$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.StatusUpdate - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.StatusUpdate$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.StopDriver$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.StopExecutor$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.StopExecutors$ - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.UpdateDelegationTokens - Class in org.apache.spark.scheduler.cluster

CoarseGrainedClusterMessages.UpdateDelegationTokens$ - Class in org.apache.spark.scheduler.cluster
code() - Method in class org.apache.spark.mllib.feature.VocabWord

CodegenMetrics - Class in org.apache.spark.metrics.source
Metrics for code generation.
CodegenMetrics() - Constructor for class org.apache.spark.metrics.source.CodegenMetrics

codeLen() - Method in class org.apache.spark.mllib.feature.VocabWord

coefficientMatrix() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel

coefficients() - Method in class org.apache.spark.ml.classification.LinearSVCModel

coefficients() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
A vector of model coefficients for "binomial" logistic regression.
coefficients() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel

coefficients() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel

coefficients() - Method in class org.apache.spark.ml.regression.LinearRegressionModel

coefficientStandardErrors() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionTrainingSummary

coefficientStandardErrors() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
cogroup(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, JavaPairRDD<K, W3>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
cogroup(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, JavaPairRDD<K, W3>) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
cogroup(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, JavaPairRDD<K, W3>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, RDD<Tuple2<K, W3>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
cogroup(RDD<Tuple2<K, W>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, RDD<Tuple2<K, W3>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
cogroup(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
cogroup(RDD<Tuple2<K, W>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, RDD<Tuple2<K, W3>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
cogroup(KeyValueGroupedDataset<K, U>, Function3<K, Iterator<V>, Iterator<U>, TraversableOnce<R>>, Encoder<R>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
(Scala-specific) Applies the given function to each cogrouped data.
cogroup(KeyValueGroupedDataset<K, U>, CoGroupFunction<K, V, U, R>, Encoder<R>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
(Java-specific) Applies the given function to each cogrouped data.
cogroup(JavaPairDStream<K, W>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
cogroup(JavaPairDStream<K, W>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
cogroup(JavaPairDStream<K, W>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
cogroup(DStream<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
cogroup(DStream<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
cogroup(DStream<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
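All of the cogroup overloads above implement the same idea: for every key present in either side, collect the values from each side into per-source Iterables. A minimal Scala sketch, assuming a running SparkSession bound to a value named `spark`:

```scala
val sc = spark.sparkContext
val left  = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
val right = sc.parallelize(Seq(("a", "x"), ("c", "y")))

// For each key, a pair of Iterables: values from `left` and values from `right`.
// Keys present in only one side still appear, with an empty Iterable on the other.
val grouped = left.cogroup(right)
grouped.collect().foreach { case (k, (ls, rs)) =>
  println(s"$k -> ${ls.toList} / ${rs.toList}")
}
// e.g. a -> List(1, 3) / List(x); b -> List(2) / List(); c -> List() / List(y)
```

Unlike a join, cogroup never drops keys, which is why joins and outer joins are implemented on top of it.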
CoGroupedRDD<K> - Class in org.apache.spark.rdd
:: DeveloperApi :: An RDD that cogroups its parents.
CoGroupedRDD(Seq<RDD<? extends Product2<K, ?>>>, Partitioner, ClassTag<K>) - Constructor for class org.apache.spark.rdd.CoGroupedRDD

CoGroupFunction<K,V1,V2,R> - Interface in org.apache.spark.api.java.function
A function that returns zero or more output records from each grouping key and its values from 2 Datasets.
col(String) - Method in class org.apache.spark.sql.Dataset
Selects column based on the column name and returns it as a Column.
col(String) - Static method in class org.apache.spark.sql.functions
Returns a Column based on the given column name.
COL_POS_KEY() - Static method in class org.apache.spark.sql.Dataset

coldStartStrategy() - Method in class org.apache.spark.ml.recommendation.ALS

coldStartStrategy() - Method in class org.apache.spark.ml.recommendation.ALSModel

coldStartStrategy() - Method in interface org.apache.spark.ml.recommendation.ALSModelParams
Param for strategy for dealing with unknown or new users/items at prediction time.
colIter() - Method in class org.apache.spark.ml.linalg.DenseMatrix

colIter() - Method in interface org.apache.spark.ml.linalg.Matrix
Returns an iterator of column vectors.
colIter() - Method in class org.apache.spark.ml.linalg.SparseMatrix

colIter() - Method in class org.apache.spark.mllib.linalg.DenseMatrix

colIter() - Method in interface org.apache.spark.mllib.linalg.Matrix
Returns an iterator of column vectors.
colIter() - Method in class org.apache.spark.mllib.linalg.SparseMatrix

collect() - Method in interface org.apache.spark.api.java.JavaRDDLike
Return an array that contains all of the elements in this RDD.
collect() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl

collect() - Method in class org.apache.spark.rdd.RDD
Return an array that contains all of the elements in this RDD.
collect(PartialFunction<T, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
Return an RDD that contains all matching values by applying f.
collect() - Method in class org.apache.spark.sql.Dataset
Returns an array that contains all rows in this Dataset.
collect_list(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns a list of objects with duplicates.
collect_list(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns a list of objects with duplicates.
collect_set(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns a set of objects with duplicate elements eliminated.
collect_set(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns a set of objects with duplicate elements eliminated.
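The difference between collect_list and collect_set is only deduplication. A short Scala sketch, assuming a running SparkSession bound to a value named `spark`:

```scala
import org.apache.spark.sql.functions.{collect_list, collect_set}
import spark.implicits._

val df = Seq(("a", 1), ("a", 1), ("a", 2), ("b", 3)).toDF("k", "v")

df.groupBy("k")
  .agg(
    collect_list("v").as("all_values"),      // keeps duplicates, e.g. [1, 1, 2] for "a"
    collect_set("v").as("distinct_values"))  // deduplicated, e.g. [1, 2] for "a"
  .show()
```

Both are non-deterministic with respect to element order, since ordering depends on which partitions finish first.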
collectAsList() - Method in class org.apache.spark.sql.Dataset
Returns a Java list that contains all rows in this Dataset.
collectAsMap() - Method in class org.apache.spark.api.java.JavaPairRDD
Return the key-value pairs in this RDD to the master as a Map.
collectAsMap() - Method in class org.apache.spark.rdd.PairRDDFunctions
Return the key-value pairs in this RDD to the master as a Map.
collectAsync() - Method in interface org.apache.spark.api.java.JavaRDDLike
The asynchronous version of collect, which returns a future for retrieving an array containing all of the elements in this RDD.
collectAsync() - Method in class org.apache.spark.rdd.AsyncRDDActions
Returns a future for retrieving all elements of this RDD.
collectEdges(EdgeDirection) - Method in class org.apache.spark.graphx.GraphOps
Returns an RDD that contains for each vertex v its local edges, i.e., the edges that are incident on v, in the user-specified direction.
collectionAccumulator() - Method in class org.apache.spark.SparkContext
Create and register a CollectionAccumulator, which starts with empty list and accumulates inputs by adding them into the list.
collectionAccumulator(String) - Method in class org.apache.spark.SparkContext
Create and register a CollectionAccumulator, which starts with empty list and accumulates inputs by adding them into the list.
CollectionAccumulator<T> - Class in org.apache.spark.util
An accumulator for collecting a list of elements.
CollectionAccumulator() - Constructor for class org.apache.spark.util.CollectionAccumulator

CollectionsUtils - Class in org.apache.spark.util

CollectionsUtils() - Constructor for class org.apache.spark.util.CollectionsUtils

collectNeighborIds(EdgeDirection) - Method in class org.apache.spark.graphx.GraphOps
Collect the neighbor vertex ids for each vertex.
collectNeighbors(EdgeDirection) - Method in class org.apache.spark.graphx.GraphOps
Collect the neighbor vertex attributes for each vertex.
collectPartitions(int[]) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return an array that contains all of the elements in a specific partition of this RDD.
collectSubModels() - Method in interface org.apache.spark.ml.param.shared.HasCollectSubModels
Param for whether to collect a list of sub-models trained during tuning.
collectSubModels() - Method in class org.apache.spark.ml.tuning.CrossValidator

collectSubModels() - Method in class org.apache.spark.ml.tuning.TrainValidationSplit

colPtrs() - Method in class org.apache.spark.ml.linalg.SparseMatrix

colPtrs() - Method in class org.apache.spark.mllib.linalg.SparseMatrix

colRegex(String) - Method in class org.apache.spark.sql.Dataset
Selects column based on the column name specified as a regex and returns it as a Column.
colsPerBlock() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix

colStats(RDD<Vector>) - Static method in class org.apache.spark.mllib.stat.Statistics
Computes column-wise summary statistics for the input RDD[Vector].
Column - Class in org.apache.spark.sql.catalog
A column in Spark, as returned by listColumns method in Catalog.
Column(String, String, String, boolean, boolean, boolean) - Constructor for class org.apache.spark.sql.catalog.Column

Column - Class in org.apache.spark.sql
A column that will be computed based on the data in a DataFrame.
Column(Expression) - Constructor for class org.apache.spark.sql.Column

Column(String) - Constructor for class org.apache.spark.sql.Column

column(String) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
Create a named reference expression for a column.
column(String) - Static method in class org.apache.spark.sql.functions
Returns a Column based on the given column name.
column(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatch
Returns the column at `ordinal`.
ColumnarArray - Class in org.apache.spark.sql.vectorized
Array abstraction in ColumnVector.
ColumnarArray(ColumnVector, int, int) - Constructor for class org.apache.spark.sql.vectorized.ColumnarArray

ColumnarBatch - Class in org.apache.spark.sql.vectorized
This class wraps multiple ColumnVectors as a row-wise table.
ColumnarBatch(ColumnVector[]) - Constructor for class org.apache.spark.sql.vectorized.ColumnarBatch

ColumnarBatch(ColumnVector[], int) - Constructor for class org.apache.spark.sql.vectorized.ColumnarBatch
Create a new batch from existing column vectors.
ColumnarMap - Class in org.apache.spark.sql.vectorized
Map abstraction in ColumnVector.
ColumnarMap(ColumnVector, ColumnVector, int, int) - Constructor for class org.apache.spark.sql.vectorized.ColumnarMap

ColumnarRow - Class in org.apache.spark.sql.vectorized
Row abstraction in ColumnVector.
ColumnarRow(ColumnVector, int) - Constructor for class org.apache.spark.sql.vectorized.ColumnarRow

ColumnName - Class in org.apache.spark.sql
A convenient class used for constructing schema.
ColumnName(String) - Constructor for class org.apache.spark.sql.ColumnName

ColumnPruner - Class in org.apache.spark.ml.feature
Utility transformer for removing temporary columns from a DataFrame.
ColumnPruner(String, Set<String>) - Constructor for class org.apache.spark.ml.feature.ColumnPruner

ColumnPruner(Set<String>) - Constructor for class org.apache.spark.ml.feature.ColumnPruner

columns() - Method in class org.apache.spark.sql.Dataset
Returns all column names as an array.
columnSchema() - Static method in class org.apache.spark.ml.image.ImageSchema
Schema for the image column: Row(String, Int, Int, Int, Int, Array[Byte])
columnSimilarities() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
Compute all cosine similarities between columns of this matrix using the brute-force approach of computing normalized dot products.
columnSimilarities() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Compute all cosine similarities between columns of this matrix using the brute-force approach of computing normalized dot products.
columnSimilarities(double) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Compute similarities between columns of this matrix using a sampling approach.
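The two columnSimilarities variants on RowMatrix trade accuracy for speed: the no-argument form is exact brute force, while the threshold form uses sampling (DIMSUM) to skip low-similarity pairs. A Scala sketch, assuming a running SparkSession bound to a value named `spark`:

```scala
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.RowMatrix

val rows = spark.sparkContext.parallelize(Seq(
  Vectors.dense(1.0, 0.0, 2.0),
  Vectors.dense(0.0, 1.0, 1.0)))
val mat = new RowMatrix(rows)

// Exact cosine similarity for every pair of columns (upper triangle only).
val exact = mat.columnSimilarities()

// Sampling-based approximation; larger threshold prunes more aggressively.
val approx = mat.columnSimilarities(0.1)

exact.entries.collect().foreach(println)  // MatrixEntry(i, j, similarity)
```

The result is a CoordinateMatrix holding only the upper-triangular entries, since cosine similarity is symmetric.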
columnsToPrune() - Method in class org.apache.spark.ml.feature.ColumnPruner

columnToOldVector(Dataset<?>, String) - Static method in class org.apache.spark.ml.util.DatasetUtils

columnToVector(Dataset<?>, String) - Static method in class org.apache.spark.ml.util.DatasetUtils
Cast a column in a Dataset to Vector type.
ColumnVector - Class in org.apache.spark.sql.vectorized
An interface representing in-memory columnar data in Spark.
combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean, Serializer) - Method in class org.apache.spark.api.java.JavaPairRDD
Generic function to combine the elements for each key using a custom set of aggregation functions.
combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
Generic function to combine the elements for each key using a custom set of aggregation functions.
combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
Simplified version of combineByKey that hash-partitions the output RDD and uses map-side aggregation.
combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>) - Method in class org.apache.spark.api.java.JavaPairRDD
Simplified version of combineByKey that hash-partitions the resulting RDD using the existing partitioner/parallelism level and using map-side aggregation.
combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean, Serializer) - Method in class org.apache.spark.rdd.PairRDDFunctions
Generic function to combine the elements for each key using a custom set of aggregation functions.
combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
Simplified version of combineByKeyWithClassTag that hash-partitions the output RDD.
combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Simplified version of combineByKeyWithClassTag that hash-partitions the resulting RDD using the existing partitioner/parallelism level.
combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Combine elements of each key in DStream's RDDs using custom function.
combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Combine elements of each key in DStream's RDDs using custom function.
combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean, ClassTag<C>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Combine elements of each key in DStream's RDDs using custom functions.
combineByKeyWithClassTag(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean, Serializer, ClassTag<C>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Generic function to combine the elements for each key using a custom set of aggregation functions.
combineByKeyWithClassTag(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, int, ClassTag<C>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Simplified version of combineByKeyWithClassTag that hash-partitions the output RDD.
combineByKeyWithClassTag(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, ClassTag<C>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Simplified version of combineByKeyWithClassTag that hash-partitions the resulting RDD using the existing partitioner/parallelism level.
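The three functions shared by every combineByKey overload are createCombiner (first value for a key), mergeValue (fold a value into a partition-local combiner), and mergeCombiners (merge combiners across partitions). A Scala sketch of the classic per-key mean, assuming a running SparkSession bound to a value named `spark`:

```scala
val pairs = spark.sparkContext.parallelize(Seq(("a", 1), ("a", 3), ("b", 2)))

// Accumulate (sum, count) per key, then divide to get the mean.
val sumCount = pairs.combineByKey(
  (v: Int) => (v, 1),                                           // createCombiner
  (acc: (Int, Int), v: Int) => (acc._1 + v, acc._2 + 1),        // mergeValue (within a partition)
  (a: (Int, Int), b: (Int, Int)) => (a._1 + b._1, a._2 + b._2)) // mergeCombiners (across partitions)

val means = sumCount.mapValues { case (sum, n) => sum.toDouble / n }
means.collect().foreach(println)  // (a,2.0) and (b,2.0), in either order
```

aggregateByKey, reduceByKey, and groupByKey are all thin wrappers over combineByKeyWithClassTag with fixed choices for these three functions.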
combineCombinersByKey(Iterator<? extends Product2<K, C>>, TaskContext) - Method in class org.apache.spark.Aggregator

combineValuesByKey(Iterator<? extends Product2<K, V>>, TaskContext) - Method in class org.apache.spark.Aggregator

CommandLineLoggingUtils - Interface in org.apache.spark.util

CommandLineUtils - Interface in org.apache.spark.util
Contains basic command line parsing functionality and methods to parse some common Spark CLI options.
comment() - Method in class org.apache.spark.sql.connector.catalog.TableChange.AddColumn

commit(Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser

commit(Offset) - Method in interface org.apache.spark.sql.connector.read.streaming.SparkDataStream
Informs the source that Spark has completed processing all data for offsets less than or equal to `end` and will only request offsets greater than `end` in the future.
commit(WriterCommitMessage[]) - Method in interface org.apache.spark.sql.connector.write.BatchWrite
Commits this writing job with a list of commit messages.
commit() - Method in interface org.apache.spark.sql.connector.write.DataWriter
Commits this writer after all records are written successfully, returns a commit message which will be sent back to driver side and passed to BatchWrite.commit(WriterCommitMessage[]).
commit(long, WriterCommitMessage[]) - Method in interface org.apache.spark.sql.connector.write.streaming.StreamingWrite
Commits this writing job for the specified epoch with a list of commit messages.
commitAllPartitions() - Method in interface org.apache.spark.shuffle.api.ShuffleMapOutputWriter
Commits the writes done by all partition writers returned by all calls to this object's ShuffleMapOutputWriter.getPartitionWriter(int), and returns the number of bytes written for each partition.
commitJob(JobContext, Seq<FileCommitProtocol.TaskCommitMessage>) - Method in class org.apache.spark.internal.io.FileCommitProtocol
Commits a job after the writes succeed.
commitJob(JobContext, Seq<FileCommitProtocol.TaskCommitMessage>) - Method in class org.apache.spark.internal.io.HadoopMapReduceCommitProtocol

commitStagedChanges() - Method in interface org.apache.spark.sql.connector.catalog.StagedTable
Finalize the creation or replacement of this table.
commitTask(TaskAttemptContext) - Method in class org.apache.spark.internal.io.FileCommitProtocol
Commits a task after the writes succeed.
commitTask(TaskAttemptContext) - Method in class org.apache.spark.internal.io.HadoopMapReduceCommitProtocol

commitTask(OutputCommitter, TaskAttemptContext, int, int) - Static method in class org.apache.spark.mapred.SparkHadoopMapRedUtil
Commits a task output.
commonHeaderNodes(HttpServletRequest) - Static method in class org.apache.spark.ui.UIUtils

comparator(Schedulable, Schedulable) - Method in interface org.apache.spark.scheduler.SchedulingAlgorithm

compare(PartitionGroup, PartitionGroup) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer.partitionGroupOrdering$

compare(byte, byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric

compare(Decimal) - Method in class org.apache.spark.sql.types.Decimal

compare(Decimal, Decimal) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted

compare(Decimal, Decimal) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric

compare(double, double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric

compare(float, float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric

compare(int, int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric

compare(long, long) - Static method in class org.apache.spark.sql.types.LongExactNumeric

compare(short, short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric

compare(RDDInfo) - Method in class org.apache.spark.storage.RDDInfo
compareTo(SparkShutdownHook) - 类 中的方法org.apache.spark.util.SparkShutdownHook
 
compileValue(Object) - 类 中的静态方法org.apache.spark.sql.jdbc.DB2Dialect
 
compileValue(Object) - 类 中的静态方法org.apache.spark.sql.jdbc.DerbyDialect
 
compileValue(Object) - 类 中的方法org.apache.spark.sql.jdbc.JdbcDialect
Converts value to SQL expression.
compileValue(Object) - 类 中的静态方法org.apache.spark.sql.jdbc.MsSqlServerDialect
 
compileValue(Object) - 类 中的静态方法org.apache.spark.sql.jdbc.MySQLDialect
 
compileValue(Object) - 类 中的静态方法org.apache.spark.sql.jdbc.NoopDialect
 
compileValue(Object) - 类 中的静态方法org.apache.spark.sql.jdbc.OracleDialect
 
compileValue(Object) - 类 中的静态方法org.apache.spark.sql.jdbc.PostgresDialect
 
compileValue(Object) - 类 中的静态方法org.apache.spark.sql.jdbc.TeradataDialect
 
Complete() - 类 中的静态方法org.apache.spark.sql.streaming.OutputMode
OutputMode in which all the rows in the streaming DataFrame/Dataset will be written to the sink every time there are some updates.
completed() - 类 中的方法org.apache.spark.status.api.v1.ApplicationAttemptInfo
 
completedIndices() - 类 中的方法org.apache.spark.status.LiveJob
 
completedIndices() - 类 中的方法org.apache.spark.status.LiveStage
 
completedStages() - 类 中的方法org.apache.spark.status.LiveJob
 
completedTasks() - 类 中的方法org.apache.spark.status.api.v1.ExecutorSummary
 
completedTasks() - 类 中的方法org.apache.spark.status.LiveExecutor
 
completedTasks() - 类 中的方法org.apache.spark.status.LiveJob
 
completedTasks() - 类 中的方法org.apache.spark.status.LiveStage
 
COMPLETION_TIME() - 类 中的静态方法org.apache.spark.status.TaskIndexNames
 
completionTime() - 类 中的方法org.apache.spark.scheduler.StageInfo
Time when all tasks in the stage completed or when the stage was cancelled.
completionTime() - 类 中的方法org.apache.spark.status.api.v1.JobData
 
completionTime() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
completionTime() - 类 中的方法org.apache.spark.status.LiveJob
 
ComplexFutureAction<T> - org.apache.spark中的类
A FutureAction for actions that could trigger multiple Spark jobs.
ComplexFutureAction(Function1<JobSubmitter, Future<T>>) - 类 的构造器org.apache.spark.ComplexFutureAction
 
compressed() - Method in interface org.apache.spark.ml.linalg.Matrix
Returns a matrix in dense column major, dense row major, sparse row major, or sparse column major format, whichever uses less storage.
compressed() - Method in interface org.apache.spark.ml.linalg.Vector
Returns a vector in either dense or sparse format, whichever uses less storage.
compressed() - Method in interface org.apache.spark.mllib.linalg.Vector
Returns a vector in either dense or sparse format, whichever uses less storage.
compressedColMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
Returns a matrix in dense or sparse column major format, whichever uses less storage.
compressedContinuousInputStream(InputStream) - Method in interface org.apache.spark.io.CompressionCodec
 
compressedContinuousInputStream(InputStream) - Method in class org.apache.spark.io.ZStdCompressionCodec
 
compressedContinuousOutputStream(OutputStream) - Method in interface org.apache.spark.io.CompressionCodec
 
compressedInputStream(InputStream) - Method in interface org.apache.spark.io.CompressionCodec
 
compressedInputStream(InputStream) - Method in class org.apache.spark.io.LZ4CompressionCodec
 
compressedInputStream(InputStream) - Method in class org.apache.spark.io.LZFCompressionCodec
 
compressedInputStream(InputStream) - Method in class org.apache.spark.io.SnappyCompressionCodec
 
compressedInputStream(InputStream) - Method in class org.apache.spark.io.ZStdCompressionCodec
 
compressedOutputStream(OutputStream) - Method in interface org.apache.spark.io.CompressionCodec
 
compressedOutputStream(OutputStream) - Method in class org.apache.spark.io.LZ4CompressionCodec
 
compressedOutputStream(OutputStream) - Method in class org.apache.spark.io.LZFCompressionCodec
 
compressedOutputStream(OutputStream) - Method in class org.apache.spark.io.SnappyCompressionCodec
 
compressedOutputStream(OutputStream) - Method in class org.apache.spark.io.ZStdCompressionCodec
 
compressedRowMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
Returns a matrix in dense or sparse row major format, whichever uses less storage.
CompressionCodec - Interface in org.apache.spark.io
:: DeveloperApi :: CompressionCodec allows the customization of choosing different compression implementations to be used in block storage.
compute(Partition, TaskContext) - Method in class org.apache.spark.api.r.BaseRRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.graphx.EdgeRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.graphx.VertexRDD
Provides the RDD[(VertexId, VD)] equivalent output.
compute(Vector, double, Vector) - Method in class org.apache.spark.mllib.optimization.Gradient
Compute the gradient and loss given the features of a single data point.
compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.Gradient
Compute the gradient and loss given the features of a single data point, add the gradient to a provided vector to avoid creating new objects, and return the loss.
compute(Vector, double, Vector) - Method in class org.apache.spark.mllib.optimization.HingeGradient
 
compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.HingeGradient
 
compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.L1Updater
 
compute(Vector, double, Vector) - Method in class org.apache.spark.mllib.optimization.LeastSquaresGradient
 
compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.LeastSquaresGradient
 
compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.LogisticGradient
 
compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.SimpleUpdater
 
compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.SquaredL2Updater
 
compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.Updater
Compute an updated value for weights given the gradient, stepSize, iteration number and regularization parameter.
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.CoGroupedRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.HadoopRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.JdbcRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.NewHadoopRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.PartitionPruningRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.RDD
:: DeveloperApi :: Implemented by subclasses to compute a given partition.
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.ShuffledRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.UnionRDD
 
compute(Time) - Method in class org.apache.spark.streaming.api.java.JavaDStream
Generates an RDD for the given duration.
compute(Time) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Method that generates an RDD for the given Duration.
compute(Time) - Method in class org.apache.spark.streaming.dstream.ConstantInputDStream
 
compute(Time) - Method in class org.apache.spark.streaming.dstream.DStream
Method that generates an RDD for the given time.
compute(Time) - Method in class org.apache.spark.streaming.dstream.ReceiverInputDStream
 
compute(long, long, long, long) - Method in interface org.apache.spark.streaming.scheduler.rate.RateEstimator
Computes the number of records the stream attached to this RateEstimator should ingest per second, given an update on the size and completion times of the latest batch.
computeClusterStats(Dataset<Row>, String, String) - Static method in class org.apache.spark.ml.evaluation.CosineSilhouette
The method takes the input dataset and computes the aggregated values about a cluster which are needed by the algorithm.
computeClusterStats(Dataset<Row>, String, String) - Static method in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette
The method takes the input dataset and computes the aggregated values about a cluster which are needed by the algorithm.
computeColumnSummaryStatistics() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Computes column-wise summary statistics.
computeCorrelation(RDD<Object>, RDD<Object>) - Method in interface org.apache.spark.mllib.stat.correlation.Correlation
Compute correlation for two datasets.
computeCorrelation(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
Compute the Pearson correlation for two datasets.
computeCorrelation(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.correlation.SpearmanCorrelation
Compute Spearman's correlation for two datasets.
computeCorrelationMatrix(RDD<Vector>) - Method in interface org.apache.spark.mllib.stat.correlation.Correlation
Compute the correlation matrix S, for the input matrix, where S(i, j) is the correlation between column i and j.
computeCorrelationMatrix(RDD<Vector>) - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
Compute the Pearson correlation matrix S, for the input matrix, where S(i, j) is the correlation between column i and j. 0 covariance results in a correlation value of Double.NaN.
computeCorrelationMatrix(RDD<Vector>) - Static method in class org.apache.spark.mllib.stat.correlation.SpearmanCorrelation
Compute Spearman's correlation matrix S, for the input matrix, where S(i, j) is the correlation between column i and j.
computeCorrelationMatrixFromCovariance(Matrix) - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
Compute the Pearson correlation matrix from the covariance matrix. 0 variance results in a correlation value of Double.NaN.
computeCorrelationWithMatrixImpl(RDD<Object>, RDD<Object>) - Method in interface org.apache.spark.mllib.stat.correlation.Correlation
Combine the two input RDD[Double]s into an RDD[Vector] and compute the correlation using the correlation implementation for RDD[Vector].
computeCorrelationWithMatrixImpl(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
 
computeCorrelationWithMatrixImpl(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.correlation.SpearmanCorrelation
 
computeCost(Dataset<?>) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
Deprecated.
This method is deprecated and will be removed in future versions. Use ClusteringEvaluator instead. You can also get the cost on the training dataset in the summary.
computeCost(Vector) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
Computes the squared distance between the input point and the cluster center it belongs to.
computeCost(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
Computes the sum of squared distances between the input points and their corresponding cluster centers.
computeCost(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
Java-friendly version of computeCost().
computeCost(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.KMeansModel
Returns the K-means cost (sum of squared distances of points to their nearest center) for this model on the given data.
computeCovariance() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Computes the covariance matrix, treating each row as an observation.
computeError(org.apache.spark.mllib.tree.model.TreeEnsembleModel, RDD<LabeledPoint>) - Method in interface org.apache.spark.mllib.tree.loss.Loss
Method to calculate error of the base learner for the gradient boosting calculation.
computeError(double, double) - Method in interface org.apache.spark.mllib.tree.loss.Loss
Method to calculate loss when the predictions are already known.
computeFractionForSampleSize(int, long, boolean) - Static method in class org.apache.spark.util.random.SamplingUtils
Returns a sampling rate that guarantees a sample of size greater than or equal to sampleSizeLowerBound 99.99% of the time.
computeGradient(DenseMatrix<Object>, DenseMatrix<Object>, Vector, int) - Method in interface org.apache.spark.ml.ann.TopologyModel
Computes the gradient for the network.
computeGramianMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
Computes the Gramian matrix A^T A.
computeGramianMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Computes the Gramian matrix A^T A.
computeInitialPredictionAndError(RDD<org.apache.spark.ml.feature.Instance>, double, DecisionTreeRegressionModel, Loss) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
Compute the initial predictions and errors for a dataset for the first iteration of gradient boosting.
computeInitialPredictionAndError(RDD<LabeledPoint>, double, DecisionTreeModel, Loss) - Static method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
:: DeveloperApi :: Compute the initial predictions and errors for a dataset for the first iteration of gradient boosting.
computePreferredLocations(Seq<InputFormatInfo>) - Static method in class org.apache.spark.scheduler.InputFormatInfo
Computes the preferred locations based on input(s) and returns a location-to-block map.
computePrevDelta(DenseMatrix<Object>, DenseMatrix<Object>, DenseMatrix<Object>) - Method in interface org.apache.spark.ml.ann.LayerModel
Computes the delta for back propagation.
computePrincipalComponents(int) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Computes the top k principal components only.
computePrincipalComponentsAndExplainedVariance(int) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Computes the top k principal components and a vector of proportions of variance explained by each principal component.
computeProbability(double) - Method in interface org.apache.spark.mllib.tree.loss.ClassificationLoss
Computes the class probability given the margin.
computeSilhouetteCoefficient(Broadcast<Map<Object, Tuple2<Vector, Object>>>, Vector, double) - Static method in class org.apache.spark.ml.evaluation.CosineSilhouette
Computes the Silhouette coefficient for a point.
computeSilhouetteCoefficient(Broadcast<Map<Object, SquaredEuclideanSilhouette.ClusterStats>>, Vector, double, double) - Static method in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette
Computes the Silhouette coefficient for a point.
computeSilhouetteScore(Dataset<?>, String, String) - Static method in class org.apache.spark.ml.evaluation.CosineSilhouette
Compute the Silhouette score of the dataset using the cosine distance measure.
computeSilhouetteScore(Dataset<?>, String, String) - Static method in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette
Compute the Silhouette score of the dataset using the squared Euclidean distance measure.
computeSVD(int, boolean, double) - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
Computes the singular value decomposition of this IndexedRowMatrix.
computeSVD(int, boolean, double) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Computes the singular value decomposition of this matrix.
computeThresholdByKey(Map<K, AcceptanceResult>, Map<K, Object>) - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
Given the result returned by getCounts, determine the threshold for accepting items to generate the exact sample size.
computeWeightedError(RDD<org.apache.spark.ml.feature.Instance>, DecisionTreeRegressionModel[], double[], Loss) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
Method to calculate error of the base learner for the gradient boosting calculation.
computeWeightedError(RDD<org.apache.spark.ml.feature.Instance>, RDD<Tuple2<Object, Object>>) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
Method to calculate error of the base learner for the gradient boosting calculation.
concat(Column...) - Static method in class org.apache.spark.sql.functions
Concatenates multiple input columns together into a single column.
concat(Seq<Column>) - Static method in class org.apache.spark.sql.functions
Concatenates multiple input columns together into a single column.
concat_ws(String, Column...) - Static method in class org.apache.spark.sql.functions
Concatenates multiple input string columns together into a single string column, using the given separator.
concat_ws(String, Seq<Column>) - Static method in class org.apache.spark.sql.functions
Concatenates multiple input string columns together into a single string column, using the given separator.
Conf(int, int, double, double, double, double, double, double) - Constructor for class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
 
conf() - Method in class org.apache.spark.SparkEnv
 
conf() - Method in class org.apache.spark.sql.hive.RelationConversions
 
conf() - Method in class org.apache.spark.sql.SparkSession
 
confidence() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
Returns the confidence of the rule.
confidence() - Method in class org.apache.spark.partial.BoundedDouble
 
confidence() - Method in class org.apache.spark.util.sketch.CountMinSketch
Returns the confidence (or delta) of this CountMinSketch.
config(String, String) - Method in class org.apache.spark.sql.SparkSession.Builder
Sets a config option.
config(String, long) - Method in class org.apache.spark.sql.SparkSession.Builder
Sets a config option.
config(String, double) - Method in class org.apache.spark.sql.SparkSession.Builder
Sets a config option.
config(String, boolean) - Method in class org.apache.spark.sql.SparkSession.Builder
Sets a config option.
config(SparkConf) - Method in class org.apache.spark.sql.SparkSession.Builder
Sets a list of config options based on the given SparkConf.
ConfigEntryWithDefault<T> - Class in org.apache.spark.internal.config
 
ConfigEntryWithDefault(String, Option<String>, String, List<String>, T, Function1<String, T>, Function1<T, String>, String, boolean) - Constructor for class org.apache.spark.internal.config.ConfigEntryWithDefault
 
ConfigEntryWithDefaultFunction<T> - Class in org.apache.spark.internal.config
 
ConfigEntryWithDefaultFunction(String, Option<String>, String, List<String>, Function0<T>, Function1<String, T>, Function1<T, String>, String, boolean) - Constructor for class org.apache.spark.internal.config.ConfigEntryWithDefaultFunction
 
ConfigEntryWithDefaultString<T> - Class in org.apache.spark.internal.config
 
ConfigEntryWithDefaultString(String, Option<String>, String, List<String>, String, Function1<String, T>, Function1<T, String>, String, boolean) - Constructor for class org.apache.spark.internal.config.ConfigEntryWithDefaultString
 
ConfigHelpers - Class in org.apache.spark.internal.config
 
ConfigHelpers() - Constructor for class org.apache.spark.internal.config.ConfigHelpers
 
ConfigProvider - Interface in org.apache.spark.internal.config
A source of configuration values.
configTestLog4j(String) - Static method in class org.apache.spark.TestUtils
Configures log4j properties used for the test suite.
Configurable - Interface in org.apache.spark.input
A trait to implement the Configurable interface.
configuration() - Method in class org.apache.spark.scheduler.InputFormatInfo
 
CONFIGURATION_INSTANTIATION_LOCK() - Static method in class org.apache.spark.rdd.HadoopRDD
Configuration's constructor is not threadsafe (see SPARK-1097 and HADOOP-10456).
CONFIGURATION_INSTANTIATION_LOCK() - Static method in class org.apache.spark.rdd.NewHadoopRDD
Configuration's constructor is not threadsafe (see SPARK-1097 and HADOOP-10456).
configureJobPropertiesForStorageHandler(TableDesc, Configuration, boolean) - Static method in class org.apache.spark.sql.hive.HiveTableUtil
 
confusionMatrix() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
Returns the confusion matrix: predicted classes are in columns, ordered by class label ascending, as in "labels".
connectedComponents() - Method in class org.apache.spark.graphx.GraphOps
Compute the connected component membership of each vertex and return a graph with the vertex value containing the lowest vertex id in the connected component containing that vertex.
connectedComponents(int) - Method in class org.apache.spark.graphx.GraphOps
Compute the connected component membership of each vertex and return a graph with the vertex value containing the lowest vertex id in the connected component containing that vertex.
ConnectedComponents - Class in org.apache.spark.graphx.lib
Connected components algorithm.
ConnectedComponents() - Constructor for class org.apache.spark.graphx.lib.ConnectedComponents
 
consequent() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
 
ConstantInputDStream<T> - Class in org.apache.spark.streaming.dstream
An input stream that always returns the same RDD on each time step.
ConstantInputDStream(StreamingContext, RDD<T>, ClassTag<T>) - Constructor for class org.apache.spark.streaming.dstream.ConstantInputDStream
 
constructTree(org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.NodeData[]) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
Given a list of nodes from a tree, construct the tree.
constructTrees(RDD<org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.NodeData>) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
 
contains(Param<?>) - Method in class org.apache.spark.ml.param.ParamMap
Checks whether a parameter is explicitly specified.
contains(String) - Method in class org.apache.spark.SparkConf
Does the configuration contain a given parameter?
contains(Object) - Method in class org.apache.spark.sql.Column
Contains the other element.
contains(String) - Method in class org.apache.spark.sql.types.Metadata
Tests whether this Metadata contains a binding for a key.
containsDelimiters() - Method in class org.apache.spark.sql.hive.execution.HiveOptions
 
containsKey(Object) - Method in class org.apache.spark.api.java.JavaUtils.SerializableMapWrapper
 
containsKey(Object) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
 
containsNull() - Method in class org.apache.spark.sql.types.ArrayType
 
containsValue(Object) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
 
contentType() - Method in class org.apache.spark.ui.JettyUtils.ServletParams
 
context() - Method in interface org.apache.spark.api.java.JavaRDDLike
The SparkContext that this RDD was created on.
context() - Method in class org.apache.spark.InterruptibleIterator
 
context() - Method in class org.apache.spark.rdd.RDD
The SparkContext that this RDD was created on.
context() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return the StreamingContext associated with this DStream.
context() - Method in class org.apache.spark.streaming.dstream.DStream
Return the StreamingContext associated with this DStream.
ContextBarrierId - Class in org.apache.spark
For each barrier stage attempt, only at most one barrier() call can be active at any time, thus we can use (stageId, stageAttemptId) to identify the stage attempt where the barrier() call is from.
ContextBarrierId(int, int) - Constructor for class org.apache.spark.ContextBarrierId
 
Continuous() - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
 
Continuous(long) - Static method in class org.apache.spark.sql.streaming.Trigger
A trigger that continuously processes streaming data, asynchronously checkpointing at the specified interval.
Continuous(long, TimeUnit) - Static method in class org.apache.spark.sql.streaming.Trigger
A trigger that continuously processes streaming data, asynchronously checkpointing at the specified interval. {{{ import java.util.concurrent.TimeUnit df.writeStream.trigger(Trigger.Continuous(10, TimeUnit.SECONDS)) }}}
Continuous(Duration) - Static method in class org.apache.spark.sql.streaming.Trigger
(Scala-friendly) A trigger that continuously processes streaming data, asynchronously checkpointing at the specified interval. {{{ import scala.concurrent.duration._ df.writeStream.trigger(Trigger.Continuous(10.seconds)) }}}
Continuous(String) - Static method in class org.apache.spark.sql.streaming.Trigger
A trigger that continuously processes streaming data, asynchronously checkpointing at the specified interval. {{{ df.writeStream.trigger(Trigger.Continuous("10 seconds")) }}}
ContinuousPartitionReader<T> - Interface in org.apache.spark.sql.connector.read.streaming
A variation on PartitionReader for use with continuous streaming processing.
ContinuousPartitionReaderFactory - Interface in org.apache.spark.sql.connector.read.streaming
A variation on PartitionReaderFactory that returns ContinuousPartitionReader instead of PartitionReader.
ContinuousSplit - Class in org.apache.spark.ml.tree
Split which tests a continuous feature.
ContinuousStream - Interface in org.apache.spark.sql.connector.read.streaming
A SparkDataStream for streaming queries with continuous mode.
conv(Column, int, int) - Static method in class org.apache.spark.sql.functions
Converts a number in a string column from one base to another.
CONVERT_INSERTING_PARTITIONED_TABLE() - Static method in class org.apache.spark.sql.hive.HiveUtils
 
CONVERT_METASTORE_CTAS() - Static method in class org.apache.spark.sql.hive.HiveUtils
 
CONVERT_METASTORE_ORC() - Static method in class org.apache.spark.sql.hive.HiveUtils
 
CONVERT_METASTORE_PARQUET() - Static method in class org.apache.spark.sql.hive.HiveUtils
 
CONVERT_METASTORE_PARQUET_WITH_SCHEMA_MERGING() - Static method in class org.apache.spark.sql.hive.HiveUtils
 
convertibleFilters(StructType, Map<String, DataType>, Seq<Filter>) - Static method in class org.apache.spark.sql.hive.orc.OrcFilters
 
convertMatrixColumnsFromML(Dataset<?>, String...) - Static method in class org.apache.spark.mllib.util.MLUtils
Converts matrix columns in an input Dataset to the Matrix type from the new Matrix type under the spark.ml package.
convertMatrixColumnsFromML(Dataset<?>, Seq<String>) - Static method in class org.apache.spark.mllib.util.MLUtils
Converts matrix columns in an input Dataset to the Matrix type from the new Matrix type under the spark.ml package.
convertMatrixColumnsToML(Dataset<?>, String...) - Static method in class org.apache.spark.mllib.util.MLUtils
Converts Matrix columns in an input Dataset from the Matrix type to the new Matrix type under the spark.ml package.
convertMatrixColumnsToML(Dataset<?>, Seq<String>) - Static method in class org.apache.spark.mllib.util.MLUtils
Converts Matrix columns in an input Dataset from the Matrix type to the new Matrix type under the spark.ml package.
convertTableProperties(Map<String, String>, Map<String, String>, Option<String>, Option<String>, String) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
 
convertToCanonicalEdges(Function2<ED, ED, ED>) - Method in class org.apache.spark.graphx.GraphOps
Converts bi-directional edges into uni-directional ones.
convertToOldLossType(String) - Method in interface org.apache.spark.ml.tree.GBTRegressorParams
 
convertToTimeUnit(long, TimeUnit) - Static method in class org.apache.spark.streaming.ui.UIUtils
Converts milliseconds to the specified unit.
convertVectorColumnsFromML(Dataset<?>, String...) - Static method in class org.apache.spark.mllib.util.MLUtils
Converts vector columns in an input Dataset to the Vector type from the new Vector type under the spark.ml package.
convertVectorColumnsFromML(Dataset<?>, Seq<String>) - Static method in class org.apache.spark.mllib.util.MLUtils
Converts vector columns in an input Dataset to the Vector type from the new Vector type under the spark.ml package.
convertVectorColumnsToML(Dataset<?>, String...) - Static method in class org.apache.spark.mllib.util.MLUtils
Converts vector columns in an input Dataset from the Vector type to the new Vector type under the spark.ml package.
convertVectorColumnsToML(Dataset<?>, Seq<String>) - Static method in class org.apache.spark.mllib.util.MLUtils
Converts vector columns in an input Dataset from the Vector type to the new Vector type under the spark.ml package.
CoordinateMatrix - Class in org.apache.spark.mllib.linalg.distributed
Represents a matrix in coordinate format.
CoordinateMatrix(RDD<MatrixEntry>, long, long) - Constructor for class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
 
CoordinateMatrix(RDD<MatrixEntry>) - Constructor for class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
Alternative constructor leaving matrix dimensions to be determined automatically.
copy(ParamMap) - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.GBTClassificationModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.GBTClassifier
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.LinearSVC
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.LinearSVCModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.LogisticRegression
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.NaiveBayes
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.NaiveBayesModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.OneVsRest
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.OneVsRestModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.GaussianMixture
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.KMeans
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.KMeansModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.LDA
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.LocalLDAModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
 
copy(ParamMap) - Method in class org.apache.spark.ml.Estimator
 
copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
 
copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
 
copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.Evaluator
 
copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
 
copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
 
copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
 
copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.Binarizer
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.Bucketizer
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.ChiSqSelector
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.ColumnPruner
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.CountVectorizer
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.CountVectorizerModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.FeatureHasher
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.HashingTF
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.IDF
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.IDFModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.Imputer
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.ImputerModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.IndexToString
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.Interaction
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.MaxAbsScaler
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.MinHashLSH
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.MinHashLSHModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.MinMaxScaler
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.OneHotEncoder
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.PCA
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.PCAModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.PolynomialExpansion
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.RegexTokenizer
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.RFormula
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.RFormulaModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.RobustScaler
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.RobustScalerModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.SQLTransformer
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.StandardScaler
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.StandardScalerModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.StopWordsRemover
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.StringIndexer
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.StringIndexerModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.Tokenizer
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.VectorAssembler
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.VectorAttributeRewriter
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.VectorIndexer
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.VectorIndexerModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.VectorSizeHint
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.VectorSlicer
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.Word2Vec
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.Word2VecModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.fpm.FPGrowth
 
copy(ParamMap) - Method in class org.apache.spark.ml.fpm.FPGrowthModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.fpm.PrefixSpan
 
copy(Vector, Vector) - Static method in class org.apache.spark.ml.linalg.BLAS
y = x
copy() - Method in class org.apache.spark.ml.linalg.DenseMatrix
 
copy() - Method in class org.apache.spark.ml.linalg.DenseVector
 
copy() - Method in interface org.apache.spark.ml.linalg.Matrix
Get a deep copy of the matrix.
copy() - 类 中的方法org.apache.spark.ml.linalg.SparseMatrix
 
copy() - 类 中的方法org.apache.spark.ml.linalg.SparseVector
 
copy() - 接口 中的方法org.apache.spark.ml.linalg.Vector
Makes a deep copy of this vector.
copy(ParamMap) - 类 中的方法org.apache.spark.ml.Model
 
copy() - 类 中的方法org.apache.spark.ml.param.ParamMap
Creates a copy of this param map.
copy(ParamMap) - 接口 中的方法org.apache.spark.ml.param.Params
Creates a copy of this instance with the same UID and some extra params.
copy(ParamMap) - 类 中的方法org.apache.spark.ml.Pipeline
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.PipelineModel
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.PipelineStage
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.Predictor
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.recommendation.ALS
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.recommendation.ALSModel
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.regression.AFTSurvivalRegression
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.regression.AFTSurvivalRegressionModel
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.regression.DecisionTreeRegressionModel
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.regression.DecisionTreeRegressor
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.regression.GBTRegressionModel
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.regression.GBTRegressor
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.regression.IsotonicRegression
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.regression.IsotonicRegressionModel
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.regression.LinearRegression
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.regression.LinearRegressionModel
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.regression.RandomForestRegressionModel
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.regression.RandomForestRegressor
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.Transformer
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.tuning.CrossValidator
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.tuning.CrossValidatorModel
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.tuning.TrainValidationSplit
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.tuning.TrainValidationSplitModel
 
copy(ParamMap) - 类 中的方法org.apache.spark.ml.UnaryTransformer
 
copy(Vector, Vector) - 类 中的静态方法org.apache.spark.mllib.linalg.BLAS
y = x
copy() - 类 中的方法org.apache.spark.mllib.linalg.DenseMatrix
 
copy() - 类 中的方法org.apache.spark.mllib.linalg.DenseVector
 
copy() - 接口 中的方法org.apache.spark.mllib.linalg.Matrix
Get a deep copy of the matrix.
copy() - 类 中的方法org.apache.spark.mllib.linalg.SparseMatrix
 
copy() - 类 中的方法org.apache.spark.mllib.linalg.SparseVector
 
copy() - 接口 中的方法org.apache.spark.mllib.linalg.Vector
Makes a deep copy of this vector.
copy() - 类 中的方法org.apache.spark.mllib.random.ExponentialGenerator
 
copy() - 类 中的方法org.apache.spark.mllib.random.GammaGenerator
 
copy() - 类 中的方法org.apache.spark.mllib.random.LogNormalGenerator
 
copy() - 类 中的方法org.apache.spark.mllib.random.PoissonGenerator
 
copy() - 接口 中的方法org.apache.spark.mllib.random.RandomDataGenerator
Returns a copy of the RandomDataGenerator with a new instance of the rng object used in the class when applicable for non-locking concurrent usage.
copy() - 类 中的方法org.apache.spark.mllib.random.StandardNormalGenerator
 
copy() - 类 中的方法org.apache.spark.mllib.random.UniformGenerator
 
copy() - 类 中的方法org.apache.spark.mllib.random.WeibullGenerator
 
copy() - 类 中的方法org.apache.spark.mllib.tree.configuration.Strategy
Returns a shallow copy of this instance.
copy() - 接口 中的方法org.apache.spark.sql.Row
Make a copy of the current Row object.
copy() - 类 中的静态方法org.apache.spark.sql.sources.AlwaysFalse
 
copy() - 类 中的静态方法org.apache.spark.sql.sources.AlwaysTrue
 
copy() - 类 中的方法org.apache.spark.sql.vectorized.ColumnarArray
 
copy() - 类 中的方法org.apache.spark.sql.vectorized.ColumnarMap
 
copy() - 类 中的方法org.apache.spark.sql.vectorized.ColumnarRow
Revisit this.
copy() - 类 中的方法org.apache.spark.util.AccumulatorV2
Creates a new copy of this accumulator.
copy() - 类 中的方法org.apache.spark.util.CollectionAccumulator
 
copy() - 类 中的方法org.apache.spark.util.DoubleAccumulator
 
copy() - 类 中的方法org.apache.spark.util.LongAccumulator
 
copy() - 类 中的方法org.apache.spark.util.StatCounter
Clone this StatCounter
copyAndReset() - 类 中的方法org.apache.spark.util.AccumulatorV2
Creates a new copy of this accumulator, which is zero value. i.e. call isZero on the copy must return true.
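The copy/copyAndReset contract above can be illustrated with a minimal, hypothetical accumulator in plain Python (this is a stand-in for the idea, not Spark's actual AccumulatorV2 implementation):

```python
class ToyLongAccumulator:
    """Illustrates the AccumulatorV2 contract: copy() preserves the
    accumulated state, while copyAndReset() returns a zero-value copy."""

    def __init__(self, value=0):
        self._value = value

    def add(self, v):
        self._value += v

    def isZero(self):
        return self._value == 0

    def copy(self):
        # Preserves the accumulated state in the new instance.
        return ToyLongAccumulator(self._value)

    def copyAndReset(self):
        # Must return a copy for which isZero() is True.
        return ToyLongAccumulator(0)


acc = ToyLongAccumulator()
acc.add(5)
print(acc.copy()._value)            # 5
print(acc.copyAndReset().isZero())  # True
```

Note that copyAndReset resets only the returned copy; the original accumulator keeps its value.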
copyAndReset() - Method in class org.apache.spark.util.CollectionAccumulator

copyFileStreamNIO(FileChannel, WritableByteChannel, long, long) - Static method in class org.apache.spark.util.Utils

copyStream(InputStream, OutputStream, boolean, boolean) - Static method in class org.apache.spark.util.Utils
Copy all data from an InputStream to an OutputStream.
copyStreamUpTo(InputStream, long) - Static method in class org.apache.spark.util.Utils
Copy the first maxSize bytes of data from the InputStream to an in-memory buffer, primarily to check for corruption.
copyValues(T, ParamMap) - Method in interface org.apache.spark.ml.param.Params
Copies param values from this instance to another instance for params shared by them.
cores() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor

coresGranted() - Method in class org.apache.spark.status.api.v1.ApplicationInfo

coresPerExecutor() - Method in class org.apache.spark.status.api.v1.ApplicationInfo

corr(Dataset<?>, String, String) - Static method in class org.apache.spark.ml.stat.Correlation
Compute the correlation matrix for the input Dataset of Vectors using the specified method.
corr(Dataset<?>, String) - Static method in class org.apache.spark.ml.stat.Correlation
Compute the Pearson correlation matrix for the input Dataset of Vectors.
corr(RDD<Object>, RDD<Object>, String) - Static method in class org.apache.spark.mllib.stat.correlation.Correlations

corr(RDD<Vector>) - Static method in class org.apache.spark.mllib.stat.Statistics
Compute the Pearson correlation matrix for the input RDD of Vectors.
corr(RDD<Vector>, String) - Static method in class org.apache.spark.mllib.stat.Statistics
Compute the correlation matrix for the input RDD of Vectors using the specified method.
corr(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.Statistics
Compute the Pearson correlation for the input RDDs.
corr(JavaRDD<Double>, JavaRDD<Double>) - Static method in class org.apache.spark.mllib.stat.Statistics
Java-friendly version of corr().
corr(RDD<Object>, RDD<Object>, String) - Static method in class org.apache.spark.mllib.stat.Statistics
Compute the correlation for the input RDDs using the specified method.
corr(JavaRDD<Double>, JavaRDD<Double>, String) - Static method in class org.apache.spark.mllib.stat.Statistics
Java-friendly version of corr().
corr(String, String, String) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Calculates the correlation of two columns of a DataFrame.
corr(String, String) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Calculates the Pearson Correlation Coefficient of two columns of a DataFrame.
corr(Column, Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the Pearson Correlation Coefficient for two columns.
corr(String, String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the Pearson Correlation Coefficient for two columns.
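The corr variants above default to the Pearson correlation coefficient. The underlying formula, cov(x, y) / (std(x) · std(y)), can be sketched in plain Python (an illustration of the statistic itself, not Spark's distributed implementation):

```python
import math

def pearson_corr(xs, ys):
    """Pearson correlation coefficient: cov(x, y) / (std(x) * std(y))."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson_corr([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0 (perfectly linear)
```

The result ranges from -1 (perfect negative linear relationship) through 0 (no linear relationship) to 1 (perfect positive linear relationship).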
Correlation - Class in org.apache.spark.ml.stat
API for correlation functions in MLlib, compatible with DataFrames and Datasets.
Correlation() - Constructor for class org.apache.spark.ml.stat.Correlation

Correlation - Interface in org.apache.spark.mllib.stat.correlation
Trait for correlation algorithms.
CorrelationNames - Class in org.apache.spark.mllib.stat.correlation
Maintains supported and default correlation names.
CorrelationNames() - Constructor for class org.apache.spark.mllib.stat.correlation.CorrelationNames

Correlations - Class in org.apache.spark.mllib.stat.correlation
Delegates computation to the specific correlation object based on the input method name.
Correlations() - Constructor for class org.apache.spark.mllib.stat.correlation.Correlations

corrMatrix(RDD<Vector>, String) - Static method in class org.apache.spark.mllib.stat.correlation.Correlations

cos(Column) - Static method in class org.apache.spark.sql.functions

cos(String) - Static method in class org.apache.spark.sql.functions

cosh(Column) - Static method in class org.apache.spark.sql.functions

cosh(String) - Static method in class org.apache.spark.sql.functions

CosineSilhouette - Class in org.apache.spark.ml.evaluation
An efficient, parallel implementation of the Silhouette measure using the cosine distance.
CosineSilhouette() - Constructor for class org.apache.spark.ml.evaluation.CosineSilhouette

count() - Method in interface org.apache.spark.api.java.JavaRDDLike
Return the number of elements in the RDD.
count() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
The number of edges in the RDD.
count() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
The number of vertices in the RDD.
count() - Method in class org.apache.spark.ml.clustering.ExpectationAggregator

count() - Method in class org.apache.spark.ml.regression.AFTAggregator

count(Column, Column) - Static method in class org.apache.spark.ml.stat.Summarizer

count(Column) - Static method in class org.apache.spark.ml.stat.Summarizer

count() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
Sample size.
count() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
Sample size.
count() - Method in class org.apache.spark.rdd.RDD
Return the number of elements in the RDD.
count() - Method in class org.apache.spark.sql.Dataset
Returns the number of rows in the Dataset.
count(MapFunction<T, Object>) - Static method in class org.apache.spark.sql.expressions.javalang.typed
Deprecated.
Count aggregate function.
count(Function1<IN, Object>) - Static method in class org.apache.spark.sql.expressions.scalalang.typed
Deprecated.
Count aggregate function.
count(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the number of items in a group.
count(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the number of items in a group.
count() - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Returns a Dataset that contains a tuple with each key and the number of items present for that key.
count() - Method in class org.apache.spark.sql.RelationalGroupedDataset
Count the number of rows for each group.
count() - Method in class org.apache.spark.status.RDDPartitionSeq

count() - Method in class org.apache.spark.storage.ReadableChannelFileRegion

count() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD has a single element generated by counting each RDD of this DStream.
count() - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD has a single element generated by counting each RDD of this DStream.
count() - Method in class org.apache.spark.util.DoubleAccumulator
Returns the number of elements added to the accumulator.
count() - Method in class org.apache.spark.util.LongAccumulator
Returns the number of elements added to the accumulator.
count() - Method in class org.apache.spark.util.StatCounter

countApprox(long, double) - Method in interface org.apache.spark.api.java.JavaRDDLike
Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.
countApprox(long) - Method in interface org.apache.spark.api.java.JavaRDDLike
Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.
countApprox(long, double) - Method in class org.apache.spark.rdd.RDD
Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.
countApproxDistinct(double) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return approximate number of distinct elements in the RDD.
countApproxDistinct(int, int) - Method in class org.apache.spark.rdd.RDD
Return approximate number of distinct elements in the RDD.
countApproxDistinct(double) - Method in class org.apache.spark.rdd.RDD
Return approximate number of distinct elements in the RDD.
countApproxDistinctByKey(double, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
Return approximate number of distinct values for each key in this RDD.
countApproxDistinctByKey(double, int) - Method in class org.apache.spark.api.java.JavaPairRDD
Return approximate number of distinct values for each key in this RDD.
countApproxDistinctByKey(double) - Method in class org.apache.spark.api.java.JavaPairRDD
Return approximate number of distinct values for each key in this RDD.
countApproxDistinctByKey(int, int, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
Return approximate number of distinct values for each key in this RDD.
countApproxDistinctByKey(double, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
Return approximate number of distinct values for each key in this RDD.
countApproxDistinctByKey(double, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
Return approximate number of distinct values for each key in this RDD.
countApproxDistinctByKey(double) - Method in class org.apache.spark.rdd.PairRDDFunctions
Return approximate number of distinct values for each key in this RDD.
countAsync() - Method in interface org.apache.spark.api.java.JavaRDDLike
The asynchronous version of count, which returns a future for counting the number of elements in this RDD.
countAsync() - Method in class org.apache.spark.rdd.AsyncRDDActions
Returns a future for counting the number of elements in the RDD.
countByKey() - Method in class org.apache.spark.api.java.JavaPairRDD
Count the number of elements for each key, and return the result to the master as a Map.
countByKey() - Method in class org.apache.spark.rdd.PairRDDFunctions
Count the number of elements for each key, collecting the results to a local Map.
countByKeyApprox(long) - Method in class org.apache.spark.api.java.JavaPairRDD
Approximate version of countByKey that can return a partial result if it does not finish within a timeout.
countByKeyApprox(long, double) - Method in class org.apache.spark.api.java.JavaPairRDD
Approximate version of countByKey that can return a partial result if it does not finish within a timeout.
countByKeyApprox(long, double) - Method in class org.apache.spark.rdd.PairRDDFunctions
Approximate version of countByKey that can return a partial result if it does not finish within a timeout.
countByValue() - Method in interface org.apache.spark.api.java.JavaRDDLike
Return the count of each unique value in this RDD as a map of (value, count) pairs.
countByValue(Ordering<T>) - Method in class org.apache.spark.rdd.RDD
Return the count of each unique value in this RDD as a local map of (value, count) pairs.
countByValue() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD contains the counts of each distinct value in each RDD of this DStream.
countByValue(int) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD contains the counts of each distinct value in each RDD of this DStream.
countByValue(int, Ordering<T>) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD contains the counts of each distinct value in each RDD of this DStream.
countByValueAndWindow(Duration, Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD contains the count of distinct elements in RDDs in a sliding window over this DStream.
countByValueAndWindow(Duration, Duration, int) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD contains the count of distinct elements in RDDs in a sliding window over this DStream.
countByValueAndWindow(Duration, Duration, int, Ordering<T>) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD contains the count of distinct elements in RDDs in a sliding window over this DStream.
countByValueApprox(long, double) - Method in interface org.apache.spark.api.java.JavaRDDLike
Approximate version of countByValue().
countByValueApprox(long) - Method in interface org.apache.spark.api.java.JavaRDDLike
Approximate version of countByValue().
countByValueApprox(long, double, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
Approximate version of countByValue().
countByWindow(Duration, Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD has a single element generated by counting the number of elements in a window over this DStream. windowDuration and slideDuration are as defined in the window() operation.
countByWindow(Duration, Duration) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD has a single element generated by counting the number of elements in a sliding window over this DStream.
countDistinct(Column, Column...) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the number of distinct items in a group.
countDistinct(String, String...) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the number of distinct items in a group.
countDistinct(Column, Seq<Column>) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the number of distinct items in a group.
countDistinct(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the number of distinct items in a group.
COUNTER() - Static method in class org.apache.spark.metrics.sink.StatsdMetricType

CountingWritableChannel - Class in org.apache.spark.storage

CountingWritableChannel(WritableByteChannel) - Constructor for class org.apache.spark.storage.CountingWritableChannel

countMinSketch(String, int, int, int) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Builds a Count-min Sketch over a specified column.
countMinSketch(String, double, double, int) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Builds a Count-min Sketch over a specified column.
countMinSketch(Column, int, int, int) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Builds a Count-min Sketch over a specified column.
countMinSketch(Column, double, double, int) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Builds a Count-min Sketch over a specified column.
CountMinSketch - Class in org.apache.spark.util.sketch
A Count-min sketch is a probabilistic data structure used for frequency estimation using sub-linear space.
CountMinSketch() - Constructor for class org.apache.spark.util.sketch.CountMinSketch

CountMinSketch.Version - Enum in org.apache.spark.util.sketch
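A Count-min sketch keeps a depth × width grid of counters updated through one hash function per row, and answers frequency queries by taking the minimum counter across rows. A minimal plain-Python sketch of the idea (the hashing scheme here is illustrative, not Spark's implementation):

```python
import random

class ToyCountMinSketch:
    """Toy Count-min sketch: over-estimates frequencies, never under-estimates."""

    def __init__(self, depth, width, seed=42):
        self.depth, self.width = depth, width
        rng = random.Random(seed)
        # One salt per row simulates independent hash functions.
        self.salts = [rng.getrandbits(32) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _cols(self, item):
        return [hash((salt, item)) % self.width for salt in self.salts]

    def add(self, item, count=1):
        for row, col in enumerate(self._cols(item)):
            self.table[row][col] += count

    def estimate(self, item):
        # Taking the minimum across rows bounds the over-count from collisions.
        return min(self.table[row][col] for row, col in enumerate(self._cols(item)))


cms = ToyCountMinSketch(depth=5, width=1000)
for _ in range(7):
    cms.add("spark")
print(cms.estimate("spark"))  # at least 7; exactly 7 unless collisions occur
```

The depth controls the confidence and the width controls the relative error, which is why Spark's factory methods accept either (depth, width) or (eps, confidence).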
 
countTowardsTaskFailures() - Method in class org.apache.spark.ExecutorLostFailure

countTowardsTaskFailures() - Method in class org.apache.spark.FetchFailed
Fetch failures lead to a different failure handling path: (1) we don't abort the stage after 4 task failures; instead we immediately go back to the stage which generated the map output and regenerate the missing data.
countTowardsTaskFailures() - Static method in class org.apache.spark.Resubmitted

countTowardsTaskFailures() - Method in class org.apache.spark.TaskCommitDenied
If a task failed because its attempt to commit was denied, do not count this failure towards failing the stage.
countTowardsTaskFailures() - Method in interface org.apache.spark.TaskFailedReason
Whether this task failure should be counted towards the maximum number of times the task is allowed to fail before the stage is aborted.
countTowardsTaskFailures() - Method in class org.apache.spark.TaskKilled

countTowardsTaskFailures() - Static method in class org.apache.spark.TaskResultLost

countTowardsTaskFailures() - Static method in class org.apache.spark.UnknownReason

CountVectorizer - Class in org.apache.spark.ml.feature
Extracts a vocabulary from document collections and generates a CountVectorizerModel.
CountVectorizer(String) - Constructor for class org.apache.spark.ml.feature.CountVectorizer

CountVectorizer() - Constructor for class org.apache.spark.ml.feature.CountVectorizer

CountVectorizerModel - Class in org.apache.spark.ml.feature
Converts a text document to a sparse vector of token counts.
CountVectorizerModel(String, String[]) - Constructor for class org.apache.spark.ml.feature.CountVectorizerModel

CountVectorizerModel(String[]) - Constructor for class org.apache.spark.ml.feature.CountVectorizerModel

CountVectorizerParams - Interface in org.apache.spark.ml.feature
cov() - Method in class org.apache.spark.ml.stat.distribution.MultivariateGaussian

cov(String, String) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Calculate the sample covariance of two numerical columns of a DataFrame.
covar_pop(Column, Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the population covariance for two columns.
covar_pop(String, String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the population covariance for two columns.
covar_samp(Column, Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the sample covariance for two columns.
covar_samp(String, String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the sample covariance for two columns.
covs() - Method in class org.apache.spark.ml.clustering.ExpectationAggregator

crc32(Column) - Static method in class org.apache.spark.sql.functions
Calculates the cyclic redundancy check value (CRC32) of a binary column and returns the value as a bigint.
CreatableRelationProvider - Interface in org.apache.spark.sql.sources

create(boolean, boolean, boolean, boolean, int) - Static method in class org.apache.spark.api.java.StorageLevels
Create a new StorageLevel object.
create(JavaSparkContext, JdbcRDD.ConnectionFactory, String, long, long, int, Function<ResultSet, T>) - Static method in class org.apache.spark.rdd.JdbcRDD
Create an RDD that executes a SQL query on a JDBC connection and reads results.
create(JavaSparkContext, JdbcRDD.ConnectionFactory, String, long, long, int) - Static method in class org.apache.spark.rdd.JdbcRDD
Create an RDD that executes a SQL query on a JDBC connection and reads results.
create(RDD<T>, Function1<Object, Object>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
Create a PartitionPruningRDD.
create(RpcEnvConfig) - Method in interface org.apache.spark.rpc.RpcEnvFactory

create() - Method in interface org.apache.spark.sql.CreateTableWriter
Create a new table from the contents of the data frame.
create() - Method in class org.apache.spark.sql.DataFrameWriterV2

create(Object...) - Static method in class org.apache.spark.sql.RowFactory
Create a Row from the given arguments.
create(long) - Static method in class org.apache.spark.util.sketch.BloomFilter
Creates a BloomFilter with the expected number of insertions and a default expected false positive probability of 3%.
create(long, double) - Static method in class org.apache.spark.util.sketch.BloomFilter
Creates a BloomFilter with the expected number of insertions and expected false positive probability.
create(long, long) - Static method in class org.apache.spark.util.sketch.BloomFilter
Creates a BloomFilter with the given expectedNumItems and numBits; it picks an optimal numHashFunctions that minimizes the false positive probability (fpp).
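For a Bloom filter with n expected items and m bits, the number of hash functions that minimizes the false positive rate is k = (m/n) ln 2. A plain-Python illustration of that standard formula (Spark's internal rounding may differ):

```python
import math

def optimal_num_hash_functions(expected_num_items, num_bits):
    """k = (m / n) * ln 2, rounded, with at least one hash function."""
    k = num_bits / expected_num_items * math.log(2)
    return max(1, round(k))

# Roughly 10 bits per item gives k = 7 hash functions
# (about a 1% false positive rate).
print(optimal_num_hash_functions(1000, 10000))  # 7
```

This is why increasing numBits for a fixed expectedNumItems also increases the chosen number of hash functions.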
create(int, int, int) - 类 中的静态方法org.apache.spark.util.sketch.CountMinSketch
Creates a CountMinSketch with given depth, width, and random seed.
create(double, double, int) - 类 中的静态方法org.apache.spark.util.sketch.CountMinSketch
Creates a CountMinSketch with given relative error (eps), confidence, and random seed.
createAlterTable(Seq<String>, CatalogPlugin, Seq<String>, Seq<TableChange>) - 类 中的静态方法org.apache.spark.sql.connector.catalog.CatalogV2Util
 
createArrayType(Column) - 类 中的静态方法org.apache.spark.sql.api.r.SQLUtils
 
createArrayType(DataType) - 类 中的静态方法org.apache.spark.sql.types.DataTypes
Creates an ArrayType by specifying the data type of elements (elementType).
createArrayType(DataType, boolean) - 类 中的静态方法org.apache.spark.sql.types.DataTypes
Creates an ArrayType by specifying the data type of elements (elementType) and whether the array contains null values (containsNull).
createAttrGroupForAttrNames(String, int, boolean, boolean) - 类 中的静态方法org.apache.spark.ml.feature.OneHotEncoderCommon
Creates an `AttributeGroup` with the required number of `BinaryAttribute`.
createBatchWriterFactory() - 接口 中的方法org.apache.spark.sql.connector.write.BatchWrite
Creates a writer factory which will be serialized and sent to executors.
createColumnarReader(InputPartition) - 接口 中的方法org.apache.spark.sql.connector.read.PartitionReaderFactory
Returns a columnar partition reader to read data from the given InputPartition.
createColumnarReader(InputPartition) - 接口 中的方法org.apache.spark.sql.connector.read.streaming.ContinuousPartitionReaderFactory
 
createCombiner() - 类 中的方法org.apache.spark.Aggregator
 
createCommitter(int) - 类 中的方法org.apache.spark.internal.io.HadoopWriteConfigUtil
 
createCompiledClass(String, File, TestUtils.JavaSourceFromString, Seq<URL>) - 类 中的静态方法org.apache.spark.TestUtils
Creates a compiled class with the source file.
createCompiledClass(String, File, String, String, Seq<URL>) - 类 中的静态方法org.apache.spark.TestUtils
Creates a compiled class with the given name.
createContinuousReaderFactory() - 接口 中的方法org.apache.spark.sql.connector.read.streaming.ContinuousStream
Returns a factory to create a ContinuousPartitionReader for each InputPartition.
createCryptoInputStream(InputStream, SparkConf, byte[]) - 类 中的静态方法org.apache.spark.security.CryptoStreamUtils
Helper method to wrap InputStream with CryptoInputStream for decryption.
createCryptoOutputStream(OutputStream, SparkConf, byte[]) - 类 中的静态方法org.apache.spark.security.CryptoStreamUtils
Helper method to wrap OutputStream with CryptoOutputStream for encryption.
createDatabase(CatalogDatabase, boolean) - 接口 中的方法org.apache.spark.sql.hive.client.HiveClient
Creates a new database with the given name.
createDataFrame(RDD<A>, TypeTags.TypeTag<A>) - 类 中的方法org.apache.spark.sql.SparkSession
Creates a DataFrame from an RDD of Product (e.g. case classes, tuples).
createDataFrame(Seq<A>, TypeTags.TypeTag<A>) - 类 中的方法org.apache.spark.sql.SparkSession
Creates a DataFrame from a local Seq of Product.
createDataFrame(RDD<Row>, StructType) - 类 中的方法org.apache.spark.sql.SparkSession
:: DeveloperApi :: Creates a DataFrame from an RDD containing Rows using the given schema.
createDataFrame(JavaRDD<Row>, StructType) - 类 中的方法org.apache.spark.sql.SparkSession
:: DeveloperApi :: Creates a DataFrame from a JavaRDD containing Rows using the given schema.
createDataFrame(List<Row>, StructType) - 类 中的方法org.apache.spark.sql.SparkSession
:: DeveloperApi :: Creates a DataFrame from a java.util.List containing Rows using the given schema.
createDataFrame(RDD<?>, Class<?>) - 类 中的方法org.apache.spark.sql.SparkSession
Applies a schema to an RDD of Java Beans.
createDataFrame(JavaRDD<?>, Class<?>) - 类 中的方法org.apache.spark.sql.SparkSession
Applies a schema to an RDD of Java Beans.
createDataFrame(List<?>, Class<?>) - 类 中的方法org.apache.spark.sql.SparkSession
Applies a schema to a List of Java Beans.
createDataFrame(RDD<A>, TypeTags.TypeTag<A>) - 类 中的方法org.apache.spark.sql.SQLContext
Creates a DataFrame from an RDD of Product (e.g. case classes, tuples).
createDataFrame(Seq<A>, TypeTags.TypeTag<A>) - 类 中的方法org.apache.spark.sql.SQLContext
Creates a DataFrame from a local Seq of Product.
createDataFrame(RDD<Row>, StructType) - 类 中的方法org.apache.spark.sql.SQLContext
:: DeveloperApi :: Creates a DataFrame from an RDD containing Rows using the given schema.
createDataFrame(JavaRDD<Row>, StructType) - 类 中的方法org.apache.spark.sql.SQLContext
:: DeveloperApi :: Creates a DataFrame from a JavaRDD containing Rows using the given schema.
createDataFrame(List<Row>, StructType) - 类 中的方法org.apache.spark.sql.SQLContext
:: DeveloperApi :: Creates a DataFrame from a java.util.List containing Rows using the given schema.
createDataFrame(RDD<?>, Class<?>) - 类 中的方法org.apache.spark.sql.SQLContext
Applies a schema to an RDD of Java Beans.
createDataFrame(JavaRDD<?>, Class<?>) - 类 中的方法org.apache.spark.sql.SQLContext
Applies a schema to an RDD of Java Beans.
createDataFrame(List<?>, Class<?>) - 类 中的方法org.apache.spark.sql.SQLContext
Applies a schema to a List of Java Beans.
createDataset(Seq<T>, Encoder<T>) - 类 中的方法org.apache.spark.sql.SparkSession
Creates a Dataset from a local Seq of data of a given type.
createDataset(RDD<T>, Encoder<T>) - 类 中的方法org.apache.spark.sql.SparkSession
Creates a Dataset from an RDD of a given type.
createDataset(List<T>, Encoder<T>) - 类 中的方法org.apache.spark.sql.SparkSession
Creates a Dataset from a java.util.List of a given type.
createDataset(Seq<T>, Encoder<T>) - 类 中的方法org.apache.spark.sql.SQLContext
Creates a Dataset from a local Seq of data of a given type.
createDataset(RDD<T>, Encoder<T>) - 类 中的方法org.apache.spark.sql.SQLContext
Creates a Dataset from an RDD of a given type.
createDataset(List<T>, Encoder<T>) - 类 中的方法org.apache.spark.sql.SQLContext
Creates a Dataset from a java.util.List of a given type.
createDecimalType(int, int) - 类 中的静态方法org.apache.spark.sql.types.DataTypes
Creates a DecimalType by specifying the precision and scale.
createDecimalType() - 类 中的静态方法org.apache.spark.sql.types.DataTypes
Creates a DecimalType with default precision and scale, which are 10 and 0.
createDF(RDD<byte[]>, StructType, SparkSession) - Static method in class org.apache.spark.sql.api.r.SQLUtils

createDirectory(File) - Static method in class org.apache.spark.util.Utils
Create a directory given the abstract pathname.
createDirectory(String, String) - Static method in class org.apache.spark.util.Utils
Create a directory inside the given parent directory.
createdTempDir() - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveDirCommand

createdTempDir() - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable

createdTempDir() - Method in interface org.apache.spark.sql.hive.execution.SaveAsHiveFile

createFilter(StructType, Filter[]) - Static method in class org.apache.spark.sql.hive.orc.OrcFilters

createFunction(String, CatalogFunction) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Create a function in an existing database.
createGlobalTempView(String) - Method in class org.apache.spark.sql.Dataset
Creates a global temporary view using the given name.
CreateHiveTableAsSelectBase - Interface in org.apache.spark.sql.hive.execution

CreateHiveTableAsSelectCommand - Class in org.apache.spark.sql.hive.execution
Create table and insert the query result into it.
CreateHiveTableAsSelectCommand(CatalogTable, LogicalPlan, Seq<String>, SaveMode) - Constructor for class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand

createJar(Seq<File>, File, Option<String>, Option<String>) - Static method in class org.apache.spark.TestUtils
Create a jar file that contains this set of files.
createJarWithClasses(Seq<String>, String, Seq<Tuple2<String, String>>, Seq<URL>) - Static method in class org.apache.spark.TestUtils
Create a jar that defines classes with the given names.
createJarWithFiles(Map<String, String>, File) - Static method in class org.apache.spark.TestUtils
Create a jar file containing multiple files.
createJobContext(String, int) - Method in class org.apache.spark.internal.io.HadoopWriteConfigUtil

createJobID(Date, int) - Static method in class org.apache.spark.internal.io.SparkHadoopWriterUtils

createJobTrackerID(Date) - Static method in class org.apache.spark.internal.io.SparkHadoopWriterUtils

createKey(SparkConf) - Static method in class org.apache.spark.security.CryptoStreamUtils
Creates a new encryption key.
createListeners(SparkConf, ElementTrackingStore) - Method in interface org.apache.spark.status.AppHistoryServerPlugin
Creates listeners to replay the event logs.
createLogForDriver(SparkConf, String, Configuration) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
Create a WriteAheadLog for the driver.
createLogForReceiver(SparkConf, String, Configuration) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
Create a WriteAheadLog for the receiver.
createMapOutputWriter(int, long, int) - Method in interface org.apache.spark.shuffle.api.ShuffleExecutorComponents
Called once per map task to create a writer that will be responsible for persisting all the partitioned bytes written by that map task.
createMapType(DataType, DataType) - Static method in class org.apache.spark.sql.types.DataTypes
Creates a MapType by specifying the data type of keys (keyType) and values (valueType).
createMapType(DataType, DataType, boolean) - Static method in class org.apache.spark.sql.types.DataTypes
Creates a MapType by specifying the data type of keys (keyType), the data type of values (valueType), and whether values contain any null value (valueContainsNull).
createMetrics(long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long) - Static method in class org.apache.spark.status.LiveEntityHelpers

createMetrics(long) - Static method in class org.apache.spark.status.LiveEntityHelpers

createModel(DenseVector<Object>) - Method in interface org.apache.spark.ml.ann.Layer
Returns the instance of the layer based on weights provided.
createNamespace(String[], Map<String, String>) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension

createNamespace(String[], Map<String, String>) - Method in interface org.apache.spark.sql.connector.catalog.SupportsNamespaces
Create a namespace in the catalog.
createOrReplace() - Method in interface org.apache.spark.sql.CreateTableWriter
Create a new table or replace an existing table with the contents of the data frame.
createOrReplace() - Method in class org.apache.spark.sql.DataFrameWriterV2

createOrReplaceGlobalTempView(String) - Method in class org.apache.spark.sql.Dataset
Creates or replaces a global temporary view using the given name.
createOrReplaceTempView(String) - Method in class org.apache.spark.sql.Dataset
Creates a local temporary view using the given name.
createOutputOperationFailureForUI(String) - Static method in class org.apache.spark.streaming.ui.UIUtils

createPartitions(String, String, Seq<CatalogTablePartition>, boolean) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Create one or many partitions in the given table.
createPathFromString(String, JobConf) - Static method in class org.apache.spark.internal.io.SparkHadoopWriterUtils

createPMMLModelExport(Object) - Static method in class org.apache.spark.mllib.pmml.export.PMMLModelExportFactory
Factory object to help create the necessary PMMLModelExport implementation, taking as input the machine learning model (for example, KMeansModel).
createProxyHandler(Function1<String, Option<String>>) - Static method in class org.apache.spark.ui.JettyUtils
Create a handler for proxying requests to Workers and Application Drivers.
createProxyLocationHeader(String, HttpServletRequest, URI) - Static method in class org.apache.spark.ui.JettyUtils

createProxyURI(String, String, String, String) - Static method in class org.apache.spark.ui.JettyUtils

createRDDFromArray(JavaSparkContext, byte[][]) - Static method in class org.apache.spark.api.r.RRDD
Create an RRDD given a sequence of byte arrays.
createRDDFromFile(JavaSparkContext, String, int) - Static method in class org.apache.spark.api.r.RRDD
Create an RRDD given a temporary file name.
createReadableChannel(ReadableByteChannel, SparkConf, byte[]) - Static method in class org.apache.spark.security.CryptoStreamUtils
Wrap a ReadableByteChannel for decryption.
createReader(InputPartition) - Method in interface org.apache.spark.sql.connector.read.PartitionReaderFactory
Returns a row-based partition reader to read data from the given InputPartition.
createReader(InputPartition) - Method in interface org.apache.spark.sql.connector.read.streaming.ContinuousPartitionReaderFactory

createReaderFactory() - Method in interface org.apache.spark.sql.connector.read.Batch
Returns a factory to create a PartitionReader for each InputPartition.
createReaderFactory() - Method in interface org.apache.spark.sql.connector.read.streaming.MicroBatchStream
Returns a factory to create a PartitionReader for each InputPartition.
createRedirectHandler(String, String, Function1<HttpServletRequest, BoxedUnit>, String, Set<String>) - Static method in class org.apache.spark.ui.JettyUtils
Create a handler that always redirects the user to the given path.
createRelation(SQLContext, SaveMode, Map<String, String>, Dataset<Row>) - Method in interface org.apache.spark.sql.sources.CreatableRelationProvider
Saves a DataFrame to a destination (using data source-specific parameters).
createRelation(SQLContext, Map<String, String>) - Method in interface org.apache.spark.sql.sources.RelationProvider
Returns a new base relation with the given parameters.
createRelation(SQLContext, Map<String, String>, StructType) - Method in interface org.apache.spark.sql.sources.SchemaRelationProvider
Returns a new base relation with the given parameters and user-defined schema.
createSchedulerBackend(SparkContext, String, TaskScheduler) - Method in interface org.apache.spark.scheduler.ExternalClusterManager
Create a scheduler backend for the given SparkContext and scheduler.
createSecret(SparkConf) - Static method in class org.apache.spark.util.Utils

createServletHandler(String, JettyUtils.ServletParams<T>, SparkConf, String) - Static method in class org.apache.spark.ui.JettyUtils
Create a context handler that responds to a request with the given path prefix.
createServletHandler(String, HttpServlet, String) - Static method in class org.apache.spark.ui.JettyUtils
Create a context handler that responds to a request with the given path prefix.
createSingleFileMapOutputWriter(int, long) - Method in interface org.apache.spark.shuffle.api.ShuffleExecutorComponents
An optional extension for creating a map output writer that can optimize the transfer of a single partition file, as the entire result of a map task, to the backing store.
createSink(SQLContext, Map<String, String>, Seq<String>, OutputMode) - Method in interface org.apache.spark.sql.sources.StreamSinkProvider

createSource(SQLContext, String, Option<StructType>, String, Map<String, String>) - Method in interface org.apache.spark.sql.sources.StreamSourceProvider

createSparkContext(String, String, String, String[], Map<Object, Object>, Map<Object, Object>) - Static method in class org.apache.spark.api.r.RRDD

createStaticHandler(String, String) - Static method in class org.apache.spark.ui.JettyUtils
Create a handler for serving files from a static directory.
createStream(JavaStreamingContext, String, String, String, String, int, Duration, StorageLevel, String, String, String, String, String) - Method in class org.apache.spark.streaming.kinesis.KinesisUtilsPythonHelper

createStreamingWriterFactory() - Method in interface org.apache.spark.sql.connector.write.streaming.StreamingWrite
Creates a writer factory which will be serialized and sent to executors.
createStructField(String, String, boolean) - Static method in class org.apache.spark.sql.api.r.SQLUtils

createStructField(String, DataType, boolean, Metadata) - Static method in class org.apache.spark.sql.types.DataTypes
Creates a StructField by specifying the name (name), data type (dataType) and whether values of this field can be null values (nullable).
createStructField(String, DataType, boolean) - Static method in class org.apache.spark.sql.types.DataTypes
Creates a StructField with empty metadata.
createStructType(Seq<StructField>) - Static method in class org.apache.spark.sql.api.r.SQLUtils

createStructType(List<StructField>) - Static method in class org.apache.spark.sql.types.DataTypes
Creates a StructType with the given list of StructFields (fields).
createStructType(StructField[]) - Static method in class org.apache.spark.sql.types.DataTypes
Creates a StructType with the given StructField array (fields).
createTable(String, String) - Method in class org.apache.spark.sql.catalog.Catalog
Creates a table from the given path and returns the corresponding DataFrame.
createTable(String, String, String) - Method in class org.apache.spark.sql.catalog.Catalog
Creates a table from the given path based on a data source and returns the corresponding DataFrame.
createTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
Creates a table based on the dataset in a data source and a set of options.
createTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
(Scala-specific) Creates a table based on the dataset in a data source and a set of options.
createTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
Create a table based on the dataset in a data source, a schema and a set of options.
createTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
(Scala-specific) Create a table based on the dataset in a data source, a schema and a set of options.
createTable(Identifier, StructType, Transform[], Map<String, String>) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension

createTable(Identifier, StructType, Transform[], Map<String, String>) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
Create a table in the catalog.
createTable(CatalogTable, boolean) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Creates a table with the given metadata.
CreateTableWriter<T> - Interface in org.apache.spark.sql
Trait to restrict calls to create and replace operations.
createTaskAttemptContext(String, int, int, int) - Method in class org.apache.spark.internal.io.HadoopWriteConfigUtil

createTaskScheduler(SparkContext, String) - Method in interface org.apache.spark.scheduler.ExternalClusterManager
Create a task scheduler instance for the given SparkContext.
createTempDir(String, String) - Static method in class org.apache.spark.util.Utils
Create a temporary directory inside the given parent directory.
createTempJsonFile(File, String, JsonAST.JValue) - Static method in class org.apache.spark.TestUtils
Creates a temp JSON file that contains the input JSON record.
createTempScriptWithExpectedOutput(File, String, String) - Static method in class org.apache.spark.TestUtils
Creates a temp bash script that prints the given output.
createTempView(String) - Method in class org.apache.spark.sql.Dataset
Creates a local temporary view using the given name.
createUnsafe(long, int, int) - Static method in class org.apache.spark.sql.types.Decimal
Creates a decimal from unscaled, precision and scale without checking the bounds.
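As a rough illustration of what an (unscaled, scale) pair denotes, the represented value is unscaled * 10^-scale. The sketch below uses Python's standard decimal module rather than Spark's Decimal, and, like the method it mirrors, performs none of the bounds checks the name "unsafe" alludes to.

```python
from decimal import Decimal

def create_unsafe(unscaled: int, precision: int, scale: int) -> Decimal:
    # The represented value is unscaled * 10**-scale; precision is carried
    # along unchecked, mirroring the "without checking the bounds" contract.
    return Decimal(unscaled).scaleb(-scale)

print(create_unsafe(123456, 8, 2))  # 1234.56
```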
createWorkspace(int) - Static method in class org.apache.spark.mllib.optimization.NNLS

createWritableChannel(WritableByteChannel, SparkConf, byte[]) - Static method in class org.apache.spark.security.CryptoStreamUtils
Wrap a WritableByteChannel for encryption.
createWriter(int, long) - Method in interface org.apache.spark.sql.connector.write.DataWriterFactory
Returns a data writer to do the actual writing work.
createWriter(int, long, long) - Method in interface org.apache.spark.sql.connector.write.streaming.StreamingDataWriterFactory
Returns a data writer to do the actual writing work.
crossJoin(Dataset<?>) - Method in class org.apache.spark.sql.Dataset
Explicit cartesian join with another DataFrame.
crosstab(String, String) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Computes a pair-wise frequency table of the given columns.
CrossValidator - Class in org.apache.spark.ml.tuning
K-fold cross validation performs model selection by splitting the dataset into a set of non-overlapping, randomly partitioned folds which are used as separate training and test datasets; e.g., with k=3 folds, K-fold cross validation will generate 3 (training, test) dataset pairs, each of which uses 2/3 of the data for training and 1/3 for testing.
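The fold arithmetic described above can be sketched without any ML library; the plain-Python helper below (hypothetical name kfold_splits) builds the k non-overlapping folds and yields one (train, test) index pair per fold.

```python
def kfold_splits(n_rows: int, k: int = 3):
    """Yield (train_indices, test_indices) pairs for k-fold CV: each of
    the k non-overlapping folds serves once as the test set."""
    indices = list(range(n_rows))
    fold_size = n_rows // k
    folds = [indices[i * fold_size:(i + 1) * fold_size] for i in range(k - 1)]
    folds.append(indices[(k - 1) * fold_size:])  # last fold takes the remainder
    for i, test in enumerate(folds):
        train = [x for fold in folds[:i] + folds[i + 1:] for x in fold]
        yield train, test

for train, test in kfold_splits(9, k=3):
    print(len(train), len(test))  # each pair uses 2/3 train, 1/3 test
```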
CrossValidator(String) - Constructor for class org.apache.spark.ml.tuning.CrossValidator

CrossValidator() - Constructor for class org.apache.spark.ml.tuning.CrossValidator

CrossValidatorModel - Class in org.apache.spark.ml.tuning
CrossValidatorModel contains the model with the highest average cross-validation metric across folds and uses this model to transform input data.
CrossValidatorModel.CrossValidatorModelWriter - Class in org.apache.spark.ml.tuning
Writer for CrossValidatorModel.
CrossValidatorParams - Interface in org.apache.spark.ml.tuning
CryptoStreamUtils - Class in org.apache.spark.security
A util class for manipulating IO encryption and decryption streams.
CryptoStreamUtils() - Constructor for class org.apache.spark.security.CryptoStreamUtils

CryptoStreamUtils.BaseErrorHandler - Interface in org.apache.spark.security
SPARK-25535.
CryptoStreamUtils.ErrorHandlingReadableChannel - Class in org.apache.spark.security

csv(String...) - Method in class org.apache.spark.sql.DataFrameReader
Loads CSV files and returns the result as a DataFrame.
csv(String) - Method in class org.apache.spark.sql.DataFrameReader
Loads a CSV file and returns the result as a DataFrame.
csv(Dataset<String>) - Method in class org.apache.spark.sql.DataFrameReader
Loads a Dataset[String] storing CSV rows and returns the result as a DataFrame.
csv(Seq<String>) - Method in class org.apache.spark.sql.DataFrameReader
Loads CSV files and returns the result as a DataFrame.
csv(String) - Method in class org.apache.spark.sql.DataFrameWriter
Saves the content of the DataFrame in CSV format at the specified path.
csv(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
Loads a CSV file stream and returns the result as a DataFrame.
cube(Column...) - Method in class org.apache.spark.sql.Dataset
Create a multi-dimensional cube for the current Dataset using the specified columns, so we can run aggregation on them.
cube(String, String...) - Method in class org.apache.spark.sql.Dataset
Create a multi-dimensional cube for the current Dataset using the specified columns, so we can run aggregation on them.
cube(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
Create a multi-dimensional cube for the current Dataset using the specified columns, so we can run aggregation on them.
cube(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
Create a multi-dimensional cube for the current Dataset using the specified columns, so we can run aggregation on them.
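Conceptually, a cube over n columns aggregates by every subset of those columns (2^n grouping sets), from the full combination down to the grand total. A small illustration of just the grouping-set enumeration, in plain Python (itertools only, not Spark):

```python
from itertools import combinations

def cube_grouping_sets(columns):
    """All 2**n grouping sets a CUBE over n columns aggregates by,
    from the full column list down to the grand total (empty set)."""
    n = len(columns)
    return [list(c) for r in range(n, -1, -1) for c in combinations(columns, r)]

print(cube_grouping_sets(["dept", "city"]))
# [['dept', 'city'], ['dept'], ['city'], []]
```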
CubeType$() - Constructor for class org.apache.spark.sql.RelationalGroupedDataset.CubeType$

cume_dist() - Static method in class org.apache.spark.sql.functions
Window function: returns the cumulative distribution of values within a window partition, i.e. the fraction of rows that are below the current row.
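The definition above can be mimicked in a few lines: for each row of one window partition, count the rows whose value is less than or equal to that row's value (ties are peers and share a result) and divide by the partition size. This is a semantics sketch in plain Python, not Spark code:

```python
def cume_dist(values):
    """For each row in one window partition: the fraction of rows whose
    value is <= that row's value (ties are peers and share a result)."""
    n = len(values)
    return [sum(1 for v in values if v <= x) / n for x in values]

print(cume_dist([1, 2, 2, 3]))  # [0.25, 0.75, 0.75, 1.0]
```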
curId() - Static method in class org.apache.spark.sql.Dataset

current_date() - Static method in class org.apache.spark.sql.functions
Returns the current date as a date column.
current_timestamp() - Static method in class org.apache.spark.sql.functions
Returns the current timestamp as a timestamp column.
currentAttemptId() - Method in interface org.apache.spark.SparkStageInfo

currentAttemptId() - Method in class org.apache.spark.SparkStageInfoImpl

currentCatalog() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
Returns the current catalog set.
currentDatabase() - Method in class org.apache.spark.sql.catalog.Catalog
Returns the current default database in this session.
currentResult() - Method in interface org.apache.spark.partial.ApproximateEvaluator

currentRow() - Static method in class org.apache.spark.sql.expressions.Window
Value representing the current row.
currPrefLocs(Partition, RDD<?>) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer

CUSTOM_EXECUTOR_LOG_URL() - Static method in class org.apache.spark.internal.config.History

CUSTOM_EXECUTOR_LOG_URL() - Static method in class org.apache.spark.internal.config.UI

customMetrics() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
 

D

DAGSchedulerEvent - Interface in org.apache.spark.scheduler
Types of events that can be handled by the DAGScheduler.
dapply(Dataset<Row>, byte[], byte[], Object[], StructType) - Static method in class org.apache.spark.sql.api.r.SQLUtils
The helper function for dapply() on the R side.
Data(Vector, double, Option<Object>) - Constructor for class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data

Data(double[], double[], double[][]) - Constructor for class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data

Data(double[], double[], double[][], String) - Constructor for class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data

Data(int) - Constructor for class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$.Data

Data(Vector, double) - Constructor for class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$.Data

data() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask

data() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate

data() - Method in class org.apache.spark.storage.ShuffleFetchCompletionListener

Data$() - Constructor for class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data$

Data$() - Constructor for class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data$

Data$() - Constructor for class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data$

Data$() - Constructor for class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$.Data$

Data$() - Constructor for class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$.Data$

Database - Class in org.apache.spark.sql.catalog
A database in Spark, as returned by the listDatabases method defined in Catalog.
Database(String, String, String) - Constructor for class org.apache.spark.sql.catalog.Database

database() - Method in class org.apache.spark.sql.catalog.Function

database() - Method in class org.apache.spark.sql.catalog.Table

databaseExists(String) - Method in class org.apache.spark.sql.catalog.Catalog
Check if the database with the specified name exists.
databaseExists(String) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Return whether a database with the specified name exists.
databaseTypeDefinition() - Method in class org.apache.spark.sql.jdbc.JdbcType

dataDistribution() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo

DATAFRAME_DAPPLY() - Static method in class org.apache.spark.api.r.RRunnerModes

DATAFRAME_GAPPLY() - Static method in class org.apache.spark.api.r.RRunnerModes

DataFrameNaFunctions - Class in org.apache.spark.sql
Functionality for working with missing data in DataFrames.
DataFrameReader - Class in org.apache.spark.sql
Interface used to load a Dataset from external storage systems (e.g. file systems, key-value stores, etc).
DataFrameStatFunctions - Class in org.apache.spark.sql
Statistic functions for DataFrames.
DataFrameWriter<T> - Class in org.apache.spark.sql
Interface used to write a Dataset to external storage systems (e.g. file systems, key-value stores, etc).
DataFrameWriterV2<T> - Class in org.apache.spark.sql
Interface used to write a Dataset to external storage using the v2 API.
dataset() - Method in class org.apache.spark.ml.FitStart

Dataset<T> - Class in org.apache.spark.sql
A Dataset is a strongly typed collection of domain-specific objects that can be transformed in parallel using functional or relational operations.
Dataset(SparkSession, LogicalPlan, Encoder<T>) - Constructor for class org.apache.spark.sql.Dataset

Dataset(SQLContext, LogicalPlan, Encoder<T>) - Constructor for class org.apache.spark.sql.Dataset

DATASET_ID_KEY() - Static method in class org.apache.spark.sql.Dataset

DATASET_ID_TAG() - Static method in class org.apache.spark.sql.Dataset

DatasetHolder<T> - Class in org.apache.spark.sql
A container for a Dataset, used for implicit conversions in Scala.
DatasetUtils - Class in org.apache.spark.ml.util

DatasetUtils() - Constructor for class org.apache.spark.ml.util.DatasetUtils

dataSource() - Method in interface org.apache.spark.ui.PagedTable

DataSourceRegister - Interface in org.apache.spark.sql.sources
Data sources should implement this trait so that they can register an alias to their data source.
DataStreamReader - Class in org.apache.spark.sql.streaming
Interface used to load a streaming Dataset from external storage systems (e.g. file systems, key-value stores, etc).
DataStreamWriter<T> - Class in org.apache.spark.sql.streaming
Interface used to write a streaming Dataset to external storage systems (e.g. file systems, key-value stores, etc).
dataTablesHeaderNodes(HttpServletRequest) - Static method in class org.apache.spark.ui.UIUtils

dataType() - Method in class org.apache.spark.sql.catalog.Column

dataType() - Method in class org.apache.spark.sql.connector.catalog.TableChange.AddColumn

dataType() - Method in interface org.apache.spark.sql.connector.expressions.Literal
Returns the SQL data type of the literal.
dataType() - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
The DataType of the returned value of this UserDefinedAggregateFunction.
DataType - Class in org.apache.spark.sql.types
The base type of all Spark SQL data types.
DataType() - Constructor for class org.apache.spark.sql.types.DataType

dataType() - Method in class org.apache.spark.sql.types.StructField

dataType() - Method in class org.apache.spark.sql.vectorized.ColumnVector
Returns the data type of this column vector.
DataTypes - Class in org.apache.spark.sql.types
To get/create a specific data type, users should use the singleton objects and factory methods provided by this class.
DataTypes() - Constructor for class org.apache.spark.sql.types.DataTypes

DataValidators - Class in org.apache.spark.mllib.util
:: DeveloperApi :: A collection of methods used to validate data before applying ML algorithms.
DataValidators() - Constructor for class org.apache.spark.mllib.util.DataValidators

DataWriter<T> - Interface in org.apache.spark.sql.connector.write
A data writer returned by DataWriterFactory.createWriter(int, long), responsible for writing data for an input RDD partition.
DataWriterFactory - Interface in org.apache.spark.sql.connector.write
A factory of DataWriter returned by BatchWrite.createBatchWriterFactory(), which is responsible for creating and initializing the actual data writer at the executor side.
date() - Method in class org.apache.spark.sql.ColumnName
Creates a new StructField of type date.
DATE() - Static method in class org.apache.spark.sql.Encoders
An encoder for nullable date type.
date_add(Column, int) - Static method in class org.apache.spark.sql.functions
Returns the date that is days days after start.
date_add(Column, Column) - Static method in class org.apache.spark.sql.functions
Returns the date that is days days after start.
date_format(Column, String) - Static method in class org.apache.spark.sql.functions
Converts a date/timestamp/string to a value of string in the format specified by the date format given by the second argument.
date_sub(Column, int) - Static method in class org.apache.spark.sql.functions
Returns the date that is days days before start.
date_sub(Column, Column) - Static method in class org.apache.spark.sql.functions
Returns the date that is days days before start.
date_trunc(String, Column) - Static method in class org.apache.spark.sql.functions
Returns timestamp truncated to the unit specified by the format.
datediff(Column, Column) - Static method in class org.apache.spark.sql.functions
Returns the number of days from start to end.
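The day arithmetic behind datediff is just calendar subtraction. A sketch with Python's standard datetime, mirroring the (end, start) argument order shown above; the helper itself is illustrative, not PySpark:

```python
from datetime import date

def datediff(end: date, start: date) -> int:
    # Number of days from start to end; negative when end precedes start.
    return (end - start).days

print(datediff(date(2020, 3, 1), date(2020, 2, 1)))  # 29: Feb 2020 has 29 days
```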
DateType - Static variable in class org.apache.spark.sql.types.DataTypes
Gets the DateType object.
DateType - Class in org.apache.spark.sql.types
The date type represents a valid date in the proleptic Gregorian calendar.
DateType() - Constructor for class org.apache.spark.sql.types.DateType

dayofmonth(Column) - Static method in class org.apache.spark.sql.functions
Extracts the day of the month as an integer from a given date/timestamp/string.
dayofweek(Column) - Static method in class org.apache.spark.sql.functions
Extracts the day of the week as an integer from a given date/timestamp/string.
dayofyear(Column) - Static method in class org.apache.spark.sql.functions
Extracts the day of the year as an integer from a given date/timestamp/string.
days(String) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
Create a daily transform for a timestamp or date column.
days(String) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions

days(Column) - Static method in class org.apache.spark.sql.functions
A transform for timestamps and dates to partition data into days.
DB2Dialect - Class in org.apache.spark.sql.jdbc

DB2Dialect() - Constructor for class org.apache.spark.sql.jdbc.DB2Dialect

DCT - Class in org.apache.spark.ml.feature
A feature transformer that takes the 1D discrete cosine transform of a real vector.
DCT(String) - Constructor for class org.apache.spark.ml.feature.DCT

DCT() - Constructor for class org.apache.spark.ml.feature.DCT

deallocate() - Method in class org.apache.spark.storage.ReadableChannelFileRegion

decayFactor() - Method in class org.apache.spark.mllib.clustering.StreamingKMeans

decide(LoggingEvent) - Method in class org.apache.spark.internal.SparkShellLoggingFilter
If sparkShellThresholdLevel is not defined, this filter is a no-op.
decimal() - Method in class org.apache.spark.sql.ColumnName
Creates a new StructField of type decimal.
decimal(int, int) - Method in class org.apache.spark.sql.ColumnName
Creates a new StructField of type decimal.
DECIMAL() - Static method in class org.apache.spark.sql.Encoders
An encoder for nullable decimal type.
Decimal - Class in org.apache.spark.sql.types
A mutable implementation of BigDecimal that can hold a Long if values are small enough.
Decimal() - Constructor for class org.apache.spark.sql.types.Decimal

Decimal.DecimalAsIfIntegral$ - Class in org.apache.spark.sql.types
An Integral evidence parameter for Decimals.
Decimal.DecimalIsConflicted - Interface in org.apache.spark.sql.types
Common methods for Decimal evidence parameters.
Decimal.DecimalIsFractional$ - Class in org.apache.spark.sql.types
A Fractional evidence parameter for Decimals.
DecimalAsIfIntegral$() - Constructor for class org.apache.spark.sql.types.Decimal.DecimalAsIfIntegral$

DecimalExactNumeric - Class in org.apache.spark.sql.types

DecimalExactNumeric() - Constructor for class org.apache.spark.sql.types.DecimalExactNumeric

DecimalIsFractional$() - Constructor for class org.apache.spark.sql.types.Decimal.DecimalIsFractional$
DecimalType - Class in org.apache.spark.sql.types
The data type representing java.math.BigDecimal values.
DecimalType(int, int) - Constructor for class org.apache.spark.sql.types.DecimalType

DecimalType(int) - Constructor for class org.apache.spark.sql.types.DecimalType

DecimalType() - Constructor for class org.apache.spark.sql.types.DecimalType

DecimalType.Expression$ - Class in org.apache.spark.sql.types

DecimalType.Fixed$ - Class in org.apache.spark.sql.types

decimalTypeInfoToCatalyst(PrimitiveObjectInspector) - Method in interface org.apache.spark.sql.hive.HiveInspectors

DecisionTree - Class in org.apache.spark.mllib.tree
A class which implements a decision tree learning algorithm for classification and regression.
DecisionTree(Strategy) - Constructor for class org.apache.spark.mllib.tree.DecisionTree

DecisionTreeClassificationModel - Class in org.apache.spark.ml.classification
Decision tree model (http://en.wikipedia.org/wiki/Decision_tree_learning) for classification.
DecisionTreeClassifier - Class in org.apache.spark.ml.classification
Decision tree learning algorithm (http://en.wikipedia.org/wiki/Decision_tree_learning) for classification.
DecisionTreeClassifier(String) - Constructor for class org.apache.spark.ml.classification.DecisionTreeClassifier

DecisionTreeClassifier() - Constructor for class org.apache.spark.ml.classification.DecisionTreeClassifier

DecisionTreeClassifierParams - Interface in org.apache.spark.ml.tree

DecisionTreeModel - Interface in org.apache.spark.ml.tree
Abstraction for Decision Tree models.
DecisionTreeModel - Class in org.apache.spark.mllib.tree.model
Decision tree model for classification or regression.
DecisionTreeModel(Node, Enumeration.Value) - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel

DecisionTreeModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.tree.model

DecisionTreeModel.SaveLoadV1_0$.NodeData - Class in org.apache.spark.mllib.tree.model
Model data for model import/export.
DecisionTreeModel.SaveLoadV1_0$.NodeData$ - Class in org.apache.spark.mllib.tree.model

DecisionTreeModel.SaveLoadV1_0$.PredictData - Class in org.apache.spark.mllib.tree.model

DecisionTreeModel.SaveLoadV1_0$.PredictData$ - Class in org.apache.spark.mllib.tree.model

DecisionTreeModel.SaveLoadV1_0$.SplitData - Class in org.apache.spark.mllib.tree.model

DecisionTreeModel.SaveLoadV1_0$.SplitData$ - Class in org.apache.spark.mllib.tree.model

DecisionTreeModelReadWrite - Class in org.apache.spark.ml.tree
Helper classes for tree model persistence.
DecisionTreeModelReadWrite() - Constructor for class org.apache.spark.ml.tree.DecisionTreeModelReadWrite

DecisionTreeModelReadWrite.NodeData - Class in org.apache.spark.ml.tree
Info for a Node. param: id Index used for tree reconstruction.
DecisionTreeModelReadWrite.NodeData$ - Class in org.apache.spark.ml.tree

DecisionTreeModelReadWrite.SplitData - Class in org.apache.spark.ml.tree
Info for a Split. param: featureIndex Index of the feature split on. param: leftCategoriesOrThreshold For a categorical feature, the set of leftCategories.
DecisionTreeModelReadWrite.SplitData$ - Class in org.apache.spark.ml.tree

DecisionTreeParams - Interface in org.apache.spark.ml.tree
Parameters for Decision Tree-based algorithms.
DecisionTreeRegressionModel - Class in org.apache.spark.ml.regression
Decision tree (Wikipedia) model for regression.
DecisionTreeRegressor - Class in org.apache.spark.ml.regression
Decision tree learning algorithm for regression.
DecisionTreeRegressor(String) - Constructor for class org.apache.spark.ml.regression.DecisionTreeRegressor

DecisionTreeRegressor() - Constructor for class org.apache.spark.ml.regression.DecisionTreeRegressor

DecisionTreeRegressorParams - Interface in org.apache.spark.ml.tree
decode(Column, String) - Static method in class org.apache.spark.sql.functions
Computes the first argument into a string from a binary using the provided character set (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16').
decodeFileNameInURI(URI) - Static method in class org.apache.spark.util.Utils
Get the file name from the URI's raw path and decode it.
decodeStructField(StructField, boolean) - Method in interface org.apache.spark.ml.attribute.AttributeFactory
Creates an Attribute from a StructField instance, optionally preserving the name.
decodeURLParameter(String) - Static method in class org.apache.spark.ui.UIUtils
Decode URLParameter if the URL is encoded by YARN-WebAppProxyServlet.
DedicatedMessageLoop - Class in org.apache.spark.rpc.netty
A message loop that is dedicated to a single RPC endpoint.
DedicatedMessageLoop(String, IsolatedRpcEndpoint, Dispatcher) - Constructor for class org.apache.spark.rpc.netty.DedicatedMessageLoop

DEFAULT_CORES() - Static method in class org.apache.spark.internal.config.Deploy

DEFAULT_DRIVER_MEM_MB() - Static method in class org.apache.spark.util.Utils
Define a default value for driver memory here, since this value is referenced across the code base and nearly all files already use Utils.scala.
DEFAULT_LOG_DIR() - Static method in class org.apache.spark.internal.config.History

DEFAULT_MAX_FAILURES() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils

DEFAULT_NUM_OUTPUT_ROWS() - Static method in class org.apache.spark.sql.streaming.SinkProgress

DEFAULT_NUMBER_EXECUTORS() - Static method in class org.apache.spark.scheduler.cluster.SchedulerBackendUtils

DEFAULT_ROLLING_INTERVAL_SECS() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils

DEFAULT_SASL_KERBEROS_SERVICE_NAME() - Static method in class org.apache.spark.kafka010.KafkaTokenSparkConf

DEFAULT_SASL_TOKEN_MECHANISM() - Static method in class org.apache.spark.kafka010.KafkaTokenSparkConf

DEFAULT_SECURITY_PROTOCOL_CONFIG() - Static method in class org.apache.spark.kafka010.KafkaTokenSparkConf

DEFAULT_SHUTDOWN_PRIORITY() - Static method in class org.apache.spark.util.ShutdownHookManager

DEFAULT_TARGET_SERVERS_REGEX() - Static method in class org.apache.spark.kafka010.KafkaTokenSparkConf

defaultAttr() - Static method in class org.apache.spark.ml.attribute.BinaryAttribute
The default binary attribute.
defaultAttr() - Static method in class org.apache.spark.ml.attribute.NominalAttribute
The default nominal attribute.
defaultAttr() - Static method in class org.apache.spark.ml.attribute.NumericAttribute
The default numeric attribute.
defaultCopy(ParamMap) - 接口 中的方法org.apache.spark.ml.param.Params
Default implementation of copy with extra params.
defaultCorrName() - 类 中的静态方法org.apache.spark.mllib.stat.correlation.CorrelationNames
 
DefaultCredentials - org.apache.spark.streaming.kinesis中的类
Returns DefaultAWSCredentialsProviderChain for authentication.
DefaultCredentials() - 类 的构造器org.apache.spark.streaming.kinesis.DefaultCredentials
 
defaultLink() - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
 
defaultLink() - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
 
defaultLink() - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
 
defaultLink() - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
 
defaultMinPartitions() - 类 中的方法org.apache.spark.api.java.JavaSparkContext
Default min number of partitions for Hadoop RDDs when not given by user
defaultMinPartitions() - 类 中的方法org.apache.spark.SparkContext
Default min number of partitions for Hadoop RDDs when not given by user Notice that we use math.min so the "defaultMinPartitions" cannot be higher than 2.
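The documented rule — the default is capped at 2 via math.min — can be sketched in plain Python (this is an illustration of the rule, not the Spark API):

```python
# Sketch of how SparkContext.defaultMinPartitions is derived, per the
# documented rule: min(defaultParallelism, 2).
def default_min_partitions(default_parallelism: int) -> int:
    return min(default_parallelism, 2)

print(default_min_partitions(8))   # capped at 2 even on a large cluster
print(default_min_partitions(1))   # stays at 1 when parallelism is 1
```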
defaultNamespace() - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
 
defaultNamespace() - Method in interface org.apache.spark.sql.connector.catalog.SupportsNamespaces
Return a default namespace for the catalog.
defaultParallelism() - Method in class org.apache.spark.api.java.JavaSparkContext
Default level of parallelism to use when not given by user (e.g. parallelize and makeRDD).
defaultParallelism() - Method in interface org.apache.spark.scheduler.SchedulerBackend
 
defaultParallelism() - Method in interface org.apache.spark.scheduler.TaskScheduler
 
defaultParallelism() - Method in class org.apache.spark.SparkContext
Default level of parallelism to use when not given by user (e.g. parallelize and makeRDD).
defaultParamMap() - Method in interface org.apache.spark.ml.param.Params
Internal param map for default values.
defaultParams(String) - Static method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
Returns default configuration for the boosting algorithm.
defaultParams(Enumeration.Value) - Static method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
Returns default configuration for the boosting algorithm.
DefaultParamsReadable<T> - Interface in org.apache.spark.ml.util
:: DeveloperApi :: Helper trait for making simple Params types readable.
DefaultParamsWritable - Interface in org.apache.spark.ml.util
:: DeveloperApi :: Helper trait for making simple Params types writable.
DefaultPartitionCoalescer - Class in org.apache.spark.rdd
Coalesce the partitions of a parent RDD (prev) into fewer partitions, so that each partition of this RDD computes one or more of the parent ones.
DefaultPartitionCoalescer(double) - Constructor for class org.apache.spark.rdd.DefaultPartitionCoalescer
 
DefaultPartitionCoalescer.partitionGroupOrdering$ - Class in org.apache.spark.rdd
 
defaultPartitioner(RDD<?>, Seq<RDD<?>>) - Static method in class org.apache.spark.Partitioner
Choose a partitioner to use for a cogroup-like operation between a number of RDDs.
defaultSize() - Method in class org.apache.spark.sql.types.ArrayType
The default size of a value of the ArrayType is the default size of the element type.
defaultSize() - Method in class org.apache.spark.sql.types.BinaryType
The default size of a value of the BinaryType is 100 bytes.
defaultSize() - Method in class org.apache.spark.sql.types.BooleanType
The default size of a value of the BooleanType is 1 byte.
defaultSize() - Method in class org.apache.spark.sql.types.ByteType
The default size of a value of the ByteType is 1 byte.
defaultSize() - Method in class org.apache.spark.sql.types.CalendarIntervalType
 
defaultSize() - Method in class org.apache.spark.sql.types.DataType
The default size of a value of this data type, used internally for size estimation.
defaultSize() - Method in class org.apache.spark.sql.types.DateType
The default size of a value of the DateType is 4 bytes.
defaultSize() - Method in class org.apache.spark.sql.types.DecimalType
The default size of a value of the DecimalType is 8 bytes when precision is at most 18, and 16 bytes otherwise.
defaultSize() - Method in class org.apache.spark.sql.types.DoubleType
The default size of a value of the DoubleType is 8 bytes.
defaultSize() - Method in class org.apache.spark.sql.types.FloatType
The default size of a value of the FloatType is 4 bytes.
defaultSize() - Method in class org.apache.spark.sql.types.HiveStringType
 
defaultSize() - Method in class org.apache.spark.sql.types.IntegerType
The default size of a value of the IntegerType is 4 bytes.
defaultSize() - Method in class org.apache.spark.sql.types.LongType
The default size of a value of the LongType is 8 bytes.
defaultSize() - Method in class org.apache.spark.sql.types.MapType
The default size of a value of the MapType is (the default size of the key type + the default size of the value type).
defaultSize() - Method in class org.apache.spark.sql.types.NullType
 
defaultSize() - Method in class org.apache.spark.sql.types.ObjectType
 
defaultSize() - Method in class org.apache.spark.sql.types.ShortType
The default size of a value of the ShortType is 2 bytes.
defaultSize() - Method in class org.apache.spark.sql.types.StringType
The default size of a value of the StringType is 20 bytes.
defaultSize() - Method in class org.apache.spark.sql.types.StructType
The default size of a value of the StructType is the total default sizes of all field types.
defaultSize() - Method in class org.apache.spark.sql.types.TimestampType
The default size of a value of the TimestampType is 8 bytes.
defaultStrategy(String) - Static method in class org.apache.spark.mllib.tree.configuration.Strategy
Construct a default set of parameters for DecisionTree.
defaultStrategy(Enumeration.Value) - Static method in class org.apache.spark.mllib.tree.configuration.Strategy
Construct a default set of parameters for DecisionTree.
DefaultTopologyMapper - Class in org.apache.spark.storage
A TopologyMapper that assumes all nodes are in the same rack.
DefaultTopologyMapper(SparkConf) - Constructor for class org.apache.spark.storage.DefaultTopologyMapper
 
defaultValue() - Method in class org.apache.spark.internal.config.ConfigEntryWithDefault
 
defaultValue() - Method in class org.apache.spark.internal.config.ConfigEntryWithDefaultFunction
 
defaultValue() - Method in class org.apache.spark.internal.config.ConfigEntryWithDefaultString
 
defaultValueString() - Method in class org.apache.spark.internal.config.ConfigEntryWithDefault
 
defaultValueString() - Method in class org.apache.spark.internal.config.ConfigEntryWithDefaultFunction
 
defaultValueString() - Method in class org.apache.spark.internal.config.ConfigEntryWithDefaultString
 
degree() - Method in class org.apache.spark.ml.feature.PolynomialExpansion
The polynomial degree to expand, which should be greater than or equal to 1.
degrees() - Method in class org.apache.spark.graphx.GraphOps
 
degrees(Column) - Static method in class org.apache.spark.sql.functions
Converts an angle measured in radians to an approximately equivalent angle measured in degrees.
degrees(String) - Static method in class org.apache.spark.sql.functions
Converts an angle measured in radians to an approximately equivalent angle measured in degrees.
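The conversion performed by functions.degrees is the standard radians-to-degrees formula; a plain-Python sketch of the arithmetic (not the Spark API itself):

```python
import math

# degrees(x) multiplies an angle in radians by 180/pi.
def to_degrees(radians: float) -> float:
    return radians * 180.0 / math.pi

print(to_degrees(math.pi))      # 180.0
print(to_degrees(math.pi / 2))  # 90.0
```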
degreesOfFreedom() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
 
degreesOfFreedom() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
Degrees of freedom.
degreesOfFreedom() - Method in class org.apache.spark.mllib.stat.test.ChiSqTestResult
 
degreesOfFreedom() - Method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTestResult
 
degreesOfFreedom() - Method in interface org.apache.spark.mllib.stat.test.TestResult
Returns the degree(s) of freedom of the hypothesis test.
delegate() - Method in class org.apache.spark.InterruptibleIterator
 
DelegatingCatalogExtension - Class in org.apache.spark.sql.connector.catalog
A simple implementation of CatalogExtension, which implements all the catalog functions by calling the built-in session catalog directly.
DelegatingCatalogExtension() - Constructor for class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
 
delegationTokensRequired(SparkConf, Configuration) - Method in interface org.apache.spark.security.HadoopDelegationTokenProvider
Returns true if delegation tokens are required for this service.
deleteCheckpointFiles() - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
:: DeveloperApi :: Remove any remaining checkpoint files from training.
deleteColumn(String[]) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
Create a TableChange for deleting a field.
deleteExternalTmpPath(Configuration) - Method in interface org.apache.spark.sql.hive.execution.SaveAsHiveFile
 
deleteRecursively(File) - Static method in class org.apache.spark.util.Utils
Delete a file or directory and its contents recursively.
deleteWhere(Filter[]) - Method in interface org.apache.spark.sql.connector.catalog.SupportsDelete
Delete data from a data source table that matches filter expressions.
deleteWithJob(FileSystem, Path, boolean) - Method in class org.apache.spark.internal.io.FileCommitProtocol
Specifies that a file should be deleted with the commit of this job.
delimiterOptions() - Static method in class org.apache.spark.sql.hive.execution.HiveOptions
 
delta() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Tweedie$
Constant used in initialization and deviance to avoid numerical issues.
dense(int, int, double[]) - Static method in class org.apache.spark.ml.linalg.Matrices
Creates a column-major dense matrix.
dense(double, double...) - Static method in class org.apache.spark.ml.linalg.Vectors
Creates a dense vector from its values.
dense(double, Seq<Object>) - Static method in class org.apache.spark.ml.linalg.Vectors
Creates a dense vector from its values.
dense(double[]) - Static method in class org.apache.spark.ml.linalg.Vectors
Creates a dense vector from a double array.
dense(int, int, double[]) - Static method in class org.apache.spark.mllib.linalg.Matrices
Creates a column-major dense matrix.
dense(double, double...) - Static method in class org.apache.spark.mllib.linalg.Vectors
Creates a dense vector from its values.
dense(double, Seq<Object>) - Static method in class org.apache.spark.mllib.linalg.Vectors
Creates a dense vector from its values.
dense(double[]) - Static method in class org.apache.spark.mllib.linalg.Vectors
Creates a dense vector from a double array.
dense_rank() - Static method in class org.apache.spark.sql.functions
Window function: returns the rank of rows within a window partition, without any gaps.
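The "without any gaps" part is what distinguishes dense_rank from rank: after a tie, the next distinct value gets the next consecutive rank instead of skipping positions. A plain-Python sketch over one pre-sorted window partition (an illustration of the semantics, not the Spark implementation):

```python
# dense_rank over one sorted window partition: ties share a rank, and
# the next distinct value gets rank + 1 (no gaps).
def dense_rank(sorted_values):
    ranks, current = [], 0
    previous = object()  # sentinel that never equals a real value
    for v in sorted_values:
        if v != previous:
            current += 1
            previous = v
        ranks.append(current)
    return ranks

print(dense_rank([10, 10, 20, 30, 30]))  # [1, 1, 2, 3, 3]
# rank() would produce [1, 1, 3, 4, 4] for the same input.
```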
DenseMatrix - Class in org.apache.spark.ml.linalg
Column-major dense matrix.
DenseMatrix(int, int, double[], boolean) - Constructor for class org.apache.spark.ml.linalg.DenseMatrix
 
DenseMatrix(int, int, double[]) - Constructor for class org.apache.spark.ml.linalg.DenseMatrix
Column-major dense matrix.
DenseMatrix - Class in org.apache.spark.mllib.linalg
Column-major dense matrix.
DenseMatrix(int, int, double[], boolean) - Constructor for class org.apache.spark.mllib.linalg.DenseMatrix
 
DenseMatrix(int, int, double[]) - Constructor for class org.apache.spark.mllib.linalg.DenseMatrix
Column-major dense matrix.
DenseVector - Class in org.apache.spark.ml.linalg
A dense vector represented by a value array.
DenseVector(double[]) - Constructor for class org.apache.spark.ml.linalg.DenseVector
 
DenseVector - Class in org.apache.spark.mllib.linalg
A dense vector represented by a value array.
DenseVector(double[]) - Constructor for class org.apache.spark.mllib.linalg.DenseVector
 
dependencies() - Method in class org.apache.spark.rdd.RDD
Get the list of dependencies of this RDD, taking into account whether the RDD is checkpointed or not.
dependencies() - Method in class org.apache.spark.streaming.dstream.DStream
List of parent DStreams on which this DStream depends.
dependencies() - Method in class org.apache.spark.streaming.dstream.InputDStream
 
Dependency<T> - Class in org.apache.spark
:: DeveloperApi :: Base class for dependencies.
Dependency() - Constructor for class org.apache.spark.Dependency
 
Deploy - Class in org.apache.spark.internal.config
 
Deploy() - Constructor for class org.apache.spark.internal.config.Deploy
 
DEPLOY_MODE - Static variable in class org.apache.spark.launcher.SparkLauncher
The Spark deploy mode.
deployMode() - Method in class org.apache.spark.SparkContext
 
depth() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
 
depth() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
 
depth() - Method in interface org.apache.spark.ml.tree.DecisionTreeModel
Depth of the tree.
depth() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
Get depth of tree.
depth() - Method in class org.apache.spark.util.sketch.CountMinSketch
Depth of this CountMinSketch.
DerbyDialect - Class in org.apache.spark.sql.jdbc
 
DerbyDialect() - Constructor for class org.apache.spark.sql.jdbc.DerbyDialect
 
deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog$
 
deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity$
 
deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse$
 
deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Log$
 
deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Logit$
 
deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Probit$
 
deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt$
 
derivative() - Method in interface org.apache.spark.ml.ann.ActivationFunction
Implements a derivative of a function (needed for the back propagation).
desc() - Method in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods
 
desc() - Method in class org.apache.spark.sql.Column
Returns a sort expression based on the descending order of the column.
desc(String) - Static method in class org.apache.spark.sql.functions
Returns a sort expression based on the descending order of the column.
desc() - Method in class org.apache.spark.util.MethodIdentifier
 
desc_nulls_first() - Method in class org.apache.spark.sql.Column
Returns a sort expression based on the descending order of the column, and null values appear before non-null values.
desc_nulls_first(String) - Static method in class org.apache.spark.sql.functions
Returns a sort expression based on the descending order of the column, and null values appear before non-null values.
desc_nulls_last() - Method in class org.apache.spark.sql.Column
Returns a sort expression based on the descending order of the column, and null values appear after non-null values.
desc_nulls_last(String) - Static method in class org.apache.spark.sql.functions
Returns a sort expression based on the descending order of the column, and null values appear after non-null values.
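The "descending with nulls last" ordering can be emulated outside Spark with a composite sort key: sort on an "is null" flag first, then on the negated value. A plain-Python sketch of the semantics (assuming numeric values; not the Spark API itself):

```python
# Descending order with None values placed after all non-None values,
# mirroring what desc_nulls_last expresses for a Column.
def sort_desc_nulls_last(values):
    # (v is None) is False (sorts first) for real values, True for None;
    # negating the value gives descending order among the non-None items.
    return sorted(values, key=lambda v: (v is None, -(v if v is not None else 0)))

print(sort_desc_nulls_last([3, None, 1, 2, None]))  # [3, 2, 1, None, None]
```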
describe() - Method in interface org.apache.spark.sql.connector.expressions.Expression
Format the expression as a human-readable SQL-like string.
describe(String...) - Method in class org.apache.spark.sql.Dataset
Computes basic statistics for numeric and string columns, including count, mean, stddev, min, and max.
describe(Seq<String>) - Method in class org.apache.spark.sql.Dataset
Computes basic statistics for numeric and string columns, including count, mean, stddev, min, and max.
describeTopics(int) - Method in class org.apache.spark.ml.clustering.LDAModel
Return the topics described by their top-weighted terms.
describeTopics() - Method in class org.apache.spark.ml.clustering.LDAModel
 
describeTopics(int) - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
 
describeTopics(int) - Method in class org.apache.spark.mllib.clustering.LDAModel
Return the topics described by weighted terms.
describeTopics() - Method in class org.apache.spark.mllib.clustering.LDAModel
Return the topics described by weighted terms.
describeTopics(int) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
 
description() - Method in class org.apache.spark.ExceptionFailure
 
description() - Method in class org.apache.spark.sql.catalog.Column
 
description() - Method in class org.apache.spark.sql.catalog.Database
 
description() - Method in class org.apache.spark.sql.catalog.Function
 
description() - Method in class org.apache.spark.sql.catalog.Table
 
description() - Method in interface org.apache.spark.sql.connector.read.Scan
A description string of this scan, which may include information like: what filters are configured for this scan, what's the value of some important options like path, etc.
description() - Method in class org.apache.spark.sql.streaming.SinkProgress
 
description() - Method in class org.apache.spark.sql.streaming.SourceProgress
 
description() - Method in class org.apache.spark.status.api.v1.JobData
 
description() - Method in class org.apache.spark.status.api.v1.StageData
 
description() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo
 
description() - Method in class org.apache.spark.status.LiveStage
 
description() - Method in class org.apache.spark.storage.StorageLevel
 
description() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
 
DESER_CPU_TIME() - Static method in class org.apache.spark.status.TaskIndexNames
 
DESER_TIME() - Static method in class org.apache.spark.status.TaskIndexNames
 
DeserializationStream - Class in org.apache.spark.serializer
:: DeveloperApi :: A stream for reading serialized objects.
DeserializationStream() - Constructor for class org.apache.spark.serializer.DeserializationStream
 
deserialize(Object) - Method in class org.apache.spark.mllib.linalg.VectorUDT
 
deserialize(ByteBuffer, ClassLoader, ClassTag<T>) - Method in class org.apache.spark.serializer.DummySerializerInstance
 
deserialize(ByteBuffer, ClassTag<T>) - Method in class org.apache.spark.serializer.DummySerializerInstance
 
deserialize(ByteBuffer, ClassTag<T>) - Method in class org.apache.spark.serializer.SerializerInstance
 
deserialize(ByteBuffer, ClassLoader, ClassTag<T>) - Method in class org.apache.spark.serializer.SerializerInstance
 
deserialize(byte[]) - Static method in class org.apache.spark.util.Utils
Deserialize an object using Java serialization.
deserialize(byte[], ClassLoader) - Static method in class org.apache.spark.util.Utils
Deserialize an object using Java serialization and the given ClassLoader.
deserialized() - Method in class org.apache.spark.storage.StorageLevel
 
DeserializedMemoryEntry<T> - Class in org.apache.spark.storage.memory
 
DeserializedMemoryEntry(Object, long, ClassTag<T>) - Constructor for class org.apache.spark.storage.memory.DeserializedMemoryEntry
 
DeserializedValuesHolder<T> - Class in org.apache.spark.storage.memory
A holder for storing the deserialized values.
DeserializedValuesHolder(ClassTag<T>) - Constructor for class org.apache.spark.storage.memory.DeserializedValuesHolder
 
deserializeLongValue(byte[]) - Static method in class org.apache.spark.util.Utils
Deserialize a Long value (used for PythonPartitioner).
deserializeOffset(String) - Method in interface org.apache.spark.sql.connector.read.streaming.SparkDataStream
Deserialize a JSON string into an Offset of the implementation-defined offset type.
deserializeStream(InputStream) - Method in class org.apache.spark.serializer.DummySerializerInstance
 
deserializeStream(InputStream) - Method in class org.apache.spark.serializer.SerializerInstance
 
deserializeViaNestedStream(InputStream, SerializerInstance, Function1<DeserializationStream, BoxedUnit>) - Static method in class org.apache.spark.util.Utils
Deserialize via nested stream using specific serializer.
destroy() - Method in class org.apache.spark.broadcast.Broadcast
Destroy all data and metadata related to this broadcast variable.
destroy() - Method in class org.apache.spark.ui.HttpSecurityFilter
 
details() - Method in class org.apache.spark.scheduler.StageInfo
 
details() - Method in class org.apache.spark.status.api.v1.StageData
 
DETERMINATE() - Static method in class org.apache.spark.rdd.DeterministicLevel
 
determineBounds(ArrayBuffer<Tuple2<K, Object>>, int, Ordering<K>, ClassTag<K>) - Static method in class org.apache.spark.RangePartitioner
Determines the bounds for range partitioning from candidates with weights indicating how many items each represents.
DetermineTableStats - Class in org.apache.spark.sql.hive
 
DetermineTableStats(SparkSession) - Constructor for class org.apache.spark.sql.hive.DetermineTableStats
 
deterministic() - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
Returns true iff this function is deterministic, i.e. given the same input, it always returns the same output.
deterministic() - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
Returns true iff the UDF is deterministic, i.e. the UDF produces the same output given the same input.
DeterministicLevel - Class in org.apache.spark.rdd
The deterministic level of RDD's output (i.e. what RDD#compute returns).
DeterministicLevel() - Constructor for class org.apache.spark.rdd.DeterministicLevel
 
deviance(double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
 
deviance(double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
 
deviance(double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
 
deviance(double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
 
deviance() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
 
devianceResiduals() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
 
dfToCols(Dataset<Row>) - Static method in class org.apache.spark.sql.api.r.SQLUtils
 
dfToRowRDD(Dataset<Row>) - Static method in class org.apache.spark.sql.api.r.SQLUtils
 
dgemm(double, DenseMatrix<Object>, DenseMatrix<Object>, double, DenseMatrix<Object>) - Static method in class org.apache.spark.ml.ann.BreezeUtil
DGEMM: C := alpha * A * B + beta * C
dgemv(double, DenseMatrix<Object>, DenseVector<Object>, double, DenseVector<Object>) - Static method in class org.apache.spark.ml.ann.BreezeUtil
DGEMV: y := alpha * A * x + beta * y
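The GEMV update above — y := alpha * A * x + beta * y — written out in plain Python over nested lists, to make the arithmetic concrete (row-major A for readability; the Spark/Breeze helper operates on column-major DenseMatrix):

```python
# BLAS Level-2 GEMV: returns alpha * A @ x + beta * y for a matrix A,
# vectors x and y, and scalars alpha, beta.
def gemv(alpha, A, x, beta, y):
    return [alpha * sum(a_ij * x_j for a_ij, x_j in zip(row, x)) + beta * y_i
            for row, y_i in zip(A, y)]

A = [[1.0, 2.0],
     [3.0, 4.0]]
print(gemv(1.0, A, [1.0, 1.0], 0.0, [0.0, 0.0]))  # [3.0, 7.0]
```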
diag(Vector) - Static method in class org.apache.spark.ml.linalg.DenseMatrix
Generate a diagonal matrix in DenseMatrix format from the supplied values.
diag(Vector) - Static method in class org.apache.spark.ml.linalg.Matrices
Generate a diagonal matrix in Matrix format from the supplied values.
diag(Vector) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
Generate a diagonal matrix in DenseMatrix format from the supplied values.
diag(Vector) - Static method in class org.apache.spark.mllib.linalg.Matrices
Generate a diagonal matrix in Matrix format from the supplied values.
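What diag produces, sketched in plain Python (an illustration of the result, not the Spark types): the supplied values land on the main diagonal of an otherwise-zero n x n matrix.

```python
# Build an n x n matrix with the given values on the main diagonal
# and zeros everywhere else, as nested lists.
def diag(values):
    n = len(values)
    return [[values[i] if i == j else 0.0 for j in range(n)] for i in range(n)]

print(diag([1.0, 2.0, 3.0]))
# [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 3.0]]
```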
diff(RDD<Tuple2<Object, VD>>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
 
diff(VertexRDD<VD>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
 
diff(RDD<Tuple2<Object, VD>>) - Method in class org.apache.spark.graphx.VertexRDD
For each vertex present in both this and other, diff returns only those vertices with differing values; for values that are different, keeps the values from other.
diff(VertexRDD<VD>) - Method in class org.apache.spark.graphx.VertexRDD
For each vertex present in both this and other, diff returns only those vertices with differing values; for values that are different, keeps the values from other.
DifferentiableLossAggregator<Datum,Agg extends DifferentiableLossAggregator<Datum,Agg>> - Interface in org.apache.spark.ml.optim.aggregator
A parent trait for aggregators used in fitting MLlib models.
DifferentiableRegularization<T> - Interface in org.apache.spark.ml.optim.loss
A Breeze diff function which represents a cost function for differentiable regularization of parameters, e.g.
dim() - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
The dimension of the gradient array.
dir() - Method in class org.apache.spark.mllib.optimization.NNLS.Workspace
 
directory(File) - Method in class org.apache.spark.launcher.SparkLauncher
Sets the working directory of spark-submit.
DirectPoolMemory - Class in org.apache.spark.metrics
 
DirectPoolMemory() - Constructor for class org.apache.spark.metrics.DirectPoolMemory
 
disableOutputSpecValidation() - Static method in class org.apache.spark.internal.io.SparkHadoopWriterUtils
Allows for the spark.hadoop.validateOutputSpecs checks to be disabled on a case-by-case basis; see SPARK-4835 for more details.
disconnect() - Method in interface org.apache.spark.launcher.SparkAppHandle
Disconnects the handle from the application, without stopping it.
DISCOVERY_SCRIPT() - Static method in class org.apache.spark.resource.ResourceUtils
 
DISK_BYTES_SPILLED() - Static method in class org.apache.spark.InternalAccumulator
 
DISK_ONLY - Static variable in class org.apache.spark.api.java.StorageLevels
 
DISK_ONLY() - Static method in class org.apache.spark.storage.StorageLevel
 
DISK_ONLY_2 - Static variable in class org.apache.spark.api.java.StorageLevels
 
DISK_ONLY_2() - Static method in class org.apache.spark.storage.StorageLevel
 
DISK_SPILL() - Static method in class org.apache.spark.status.TaskIndexNames
 
DiskBlockData - Class in org.apache.spark.storage
 
DiskBlockData(long, long, File, long) - Constructor for class org.apache.spark.storage.DiskBlockData
 
diskBytesSpilled() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
 
diskBytesSpilled() - Method in class org.apache.spark.status.api.v1.StageData
 
diskBytesSpilled() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
 
diskBytesSpilled() - Method in class org.apache.spark.status.api.v1.TaskMetrics
 
diskSize() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
 
diskSize() - Method in class org.apache.spark.storage.BlockStatus
 
diskSize() - Method in class org.apache.spark.storage.BlockUpdatedInfo
 
diskSize() - Method in class org.apache.spark.storage.RDDInfo
 
diskUsed() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
 
diskUsed() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
 
diskUsed() - Method in class org.apache.spark.status.api.v1.RDDPartitionInfo
 
diskUsed() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo
 
diskUsed() - Method in class org.apache.spark.status.LiveExecutor
 
diskUsed() - Method in class org.apache.spark.status.LiveRDD
 
diskUsed() - Method in class org.apache.spark.status.LiveRDDDistribution
 
diskUsed() - Method in class org.apache.spark.status.LiveRDDPartition
 
dispersion() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
 
dispose() - Method in interface org.apache.spark.storage.BlockData
 
dispose() - Method in class org.apache.spark.storage.DiskBlockData
 
dispose(ByteBuffer) - Static method in class org.apache.spark.storage.StorageUtils
Attempt to clean up a ByteBuffer if it is direct or memory-mapped.
distanceMeasure() - Method in class org.apache.spark.ml.clustering.BisectingKMeans
 
distanceMeasure() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
 
distanceMeasure() - Method in class org.apache.spark.ml.clustering.KMeans
 
distanceMeasure() - Method in class org.apache.spark.ml.clustering.KMeansModel
 
distanceMeasure() - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
Param for distance measure to be used in evaluation (supports "squaredEuclidean" (default), "cosine").
distanceMeasure() - Method in interface org.apache.spark.ml.param.shared.HasDistanceMeasure
Param for the distance measure.
distanceMeasure() - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
 
distanceMeasure() - Method in class org.apache.spark.mllib.clustering.KMeansModel
 
distinct() - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return a new RDD containing the distinct elements in this RDD.
distinct(int) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return a new RDD containing the distinct elements in this RDD.
distinct() - Method in class org.apache.spark.api.java.JavaPairRDD
Return a new RDD containing the distinct elements in this RDD.
distinct(int) - Method in class org.apache.spark.api.java.JavaPairRDD
Return a new RDD containing the distinct elements in this RDD.
distinct() - Method in class org.apache.spark.api.java.JavaRDD
Return a new RDD containing the distinct elements in this RDD.
distinct(int) - Method in class org.apache.spark.api.java.JavaRDD
Return a new RDD containing the distinct elements in this RDD.
distinct(int, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
Return a new RDD containing the distinct elements in this RDD.
distinct() - Method in class org.apache.spark.rdd.RDD
Return a new RDD containing the distinct elements in this RDD.
distinct() - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset that contains only the unique rows from this Dataset.
distinct(Column...) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
Creates a Column for this UDAF using the distinct values of the given Columns as input arguments.
distinct(Seq<Column>) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
Creates a Column for this UDAF using the distinct values of the given Columns as input arguments.
DistributedLDAModel - Class in org.apache.spark.ml.clustering
Distributed model fitted by LDA.
DistributedLDAModel - Class in org.apache.spark.mllib.clustering
Distributed LDA model.
DistributedMatrix - Interface in org.apache.spark.mllib.linalg.distributed
Represents a distributively stored matrix backed by one or more RDDs.
Distribution - Interface in org.apache.spark.sql.connector.read.partitioning
An interface to represent data distribution requirement, which specifies how the records should be distributed among the data partitions (one PartitionReader outputs data for one partition).
distribution(LiveExecutor) - Method in class org.apache.spark.status.LiveRDD
 
distributionOpt(LiveExecutor) - Method in class org.apache.spark.status.LiveRDD
 
div(Decimal, Decimal) - Method in class org.apache.spark.sql.types.Decimal.DecimalIsFractional$
 
div(double, double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
 
div(float, float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
 
div(Duration) - Method in class org.apache.spark.streaming.Duration
 
divide(Object) - Method in class org.apache.spark.sql.Column
Divides this expression by another expression.
doc() - Method in class org.apache.spark.ml.param.Param
 
docConcentration() - Method in class org.apache.spark.ml.clustering.LDA
 
docConcentration() - Method in class org.apache.spark.ml.clustering.LDAModel
 
docConcentration() - Method in interface org.apache.spark.ml.clustering.LDAParams
Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").
docConcentration() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
 
docConcentration() - Method in class org.apache.spark.mllib.clustering.LDAModel
Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").
docConcentration() - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
 
docFreq() - Method in class org.apache.spark.ml.feature.IDFModel
Returns the document frequency.
docFreq() - Method in class org.apache.spark.mllib.feature.IDFModel
 
DocumentFrequencyAggregator(int) - Constructor for class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
 
DocumentFrequencyAggregator() - Constructor for class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
 
doesDirectoryContainAnyNewFiles(File, long) - Static method in class org.apache.spark.util.Utils
Determines if a directory contains any files newer than cutoff seconds.
doFetchFile(String, File, String, SparkConf, org.apache.spark.SecurityManager, Configuration) - Static method in class org.apache.spark.util.Utils
Download a file or directory to target directory.
doFilter(ServletRequest, ServletResponse, FilterChain) - Method in class org.apache.spark.ui.HttpSecurityFilter
 
doPostEvent(SparkListenerInterface, SparkListenerEvent) - Method in interface org.apache.spark.scheduler.SparkListenerBus
 
doPostEvent(L, E) - Method in interface org.apache.spark.util.ListenerBus
Post an event to the specified listener.
Dot - Class in org.apache.spark.ml.feature
 
Dot() - Constructor for class org.apache.spark.ml.feature.Dot
 
dot(Vector, Vector) - Static method in class org.apache.spark.ml.linalg.BLAS
dot(x, y)
dot(Vector) - Method in interface org.apache.spark.ml.linalg.Vector
Calculate the dot product of this vector with another.
dot(Vector, Vector) - Static method in class org.apache.spark.mllib.linalg.BLAS
dot(x, y)
dot(Vector) - Method in interface org.apache.spark.mllib.linalg.Vector
Calculate the dot product of this vector with another.
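The dot product computed by BLAS.dot / Vector.dot is the sum of element-wise products; a plain-Python sketch over lists (not the Spark Vector type):

```python
# Dot product of two same-length vectors: sum of pairwise products.
def dot(x, y):
    assert len(x) == len(y), "vectors must have the same dimension"
    return sum(a * b for a, b in zip(x, y))

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```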
doTest(DStream<Tuple2<StatCounter, StatCounter>>) - Method in interface org.apache.spark.mllib.stat.test.StreamingTestMethod
Perform streaming 2-sample statistical significance testing.
doTest(DStream<Tuple2<StatCounter, StatCounter>>) - Static method in class org.apache.spark.mllib.stat.test.StudentTTest
 
doTest(DStream<Tuple2<StatCounter, StatCounter>>) - Static method in class org.apache.spark.mllib.stat.test.WelchTTest
 
DOUBLE() - Static method in class org.apache.spark.sql.Encoders
An encoder for nullable double type.
doubleAccumulator() - Method in class org.apache.spark.SparkContext
Create and register a double accumulator, which starts with 0 and accumulates inputs by add.
doubleAccumulator(String) - Method in class org.apache.spark.SparkContext
Create and register a double accumulator, which starts with 0 and accumulates inputs by add.
DoubleAccumulator - Class in org.apache.spark.util
An accumulator for computing sum, count, and averages for double precision floating numbers.
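The bookkeeping behind such an accumulator — a running sum and count, with the average derived from them — can be sketched in plain Python. This is only an illustration of the state it tracks; the real DoubleAccumulator is registered with Spark and merged across tasks:

```python
# Minimal sketch of a sum/count/avg accumulator. The merge method mirrors
# how per-task accumulators are combined on the driver.
class DoubleAccumulatorSketch:
    def __init__(self):
        self.sum = 0.0
        self.count = 0

    def add(self, v: float) -> None:
        self.sum += v
        self.count += 1

    def merge(self, other: "DoubleAccumulatorSketch") -> None:
        # Combine another (per-task) accumulator into this one.
        self.sum += other.sum
        self.count += other.count

    @property
    def avg(self) -> float:
        return self.sum / self.count if self.count else float("nan")

acc = DoubleAccumulatorSketch()
for v in (1.0, 2.0, 3.0):
    acc.add(v)
print(acc.sum, acc.count, acc.avg)  # 6.0 3 2.0
```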
DoubleAccumulator() - 类 的构造器org.apache.spark.util.DoubleAccumulator
 
DoubleAccumulatorSource - org.apache.spark.metrics.source中的类
 
DoubleAccumulatorSource() - 类 的构造器org.apache.spark.metrics.source.DoubleAccumulatorSource
 
DoubleArrayArrayParam - org.apache.spark.ml.param中的类
:: DeveloperApi :: Specialized version of Param[Array[Array[Double}] for Java.
DoubleArrayArrayParam(Params, String, String, Function1<double[][], Object>) - 类 的构造器org.apache.spark.ml.param.DoubleArrayArrayParam
 
DoubleArrayArrayParam(Params, String, String) - 类 的构造器org.apache.spark.ml.param.DoubleArrayArrayParam
 
DoubleArrayParam - org.apache.spark.ml.param中的类
:: DeveloperApi :: Specialized version of Param[Array[Double} for Java.
DoubleArrayParam(Params, String, String, Function1<double[], Object>) - 类 的构造器org.apache.spark.ml.param.DoubleArrayParam
 
DoubleArrayParam(Params, String, String) - 类 的构造器org.apache.spark.ml.param.DoubleArrayParam
 
DoubleExactNumeric - org.apache.spark.sql.types中的类
 
DoubleExactNumeric() - 类 的构造器org.apache.spark.sql.types.DoubleExactNumeric
 
DoubleFlatMapFunction<T> - org.apache.spark.api.java.function中的接口
A function that returns zero or more records of type Double from each input record.
DoubleFunction<T> - org.apache.spark.api.java.function中的接口
A function that returns Doubles, and can be used to construct DoubleRDDs.
DoubleParam - org.apache.spark.ml.param中的类
:: DeveloperApi :: Specialized version of Param[Double] for Java.
DoubleParam(String, String, String, Function1<Object, Object>) - 类 的构造器org.apache.spark.ml.param.DoubleParam
 
DoubleParam(String, String, String) - 类 的构造器org.apache.spark.ml.param.DoubleParam
 
DoubleParam(Identifiable, String, String, Function1<Object, Object>) - 类 的构造器org.apache.spark.ml.param.DoubleParam
 
DoubleParam(Identifiable, String, String) - 类 的构造器org.apache.spark.ml.param.DoubleParam
 
DoubleRDDFunctions - org.apache.spark.rdd中的类
Extra functions available on RDDs of Doubles through an implicit conversion.
DoubleRDDFunctions(RDD<Object>) - 类 的构造器org.apache.spark.rdd.DoubleRDDFunctions
 
doubleRDDToDoubleRDDFunctions(RDD<Object>) - 类 中的静态方法org.apache.spark.rdd.RDD
 
DoubleType - 类 中的静态变量org.apache.spark.sql.types.DataTypes
Gets the DoubleType object.
DoubleType - org.apache.spark.sql.types中的类
The data type representing Double values.
DoubleType() - 类 的构造器org.apache.spark.sql.types.DoubleType
 
DRIVER() - 类 中的静态方法org.apache.spark.metrics.MetricsSystemInstances
 
driver() - 类 中的方法org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver
 
driver() - Method in interface org.apache.spark.shuffle.api.ShuffleDataIO
Called once on the driver process to bootstrap the shuffle metadata modules that are maintained by the driver.
DRIVER_DEFAULT_JAVA_OPTIONS - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the default driver VM options.
DRIVER_EXTRA_CLASSPATH - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the driver class path.
DRIVER_EXTRA_JAVA_OPTIONS - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the driver VM options.
DRIVER_EXTRA_LIBRARY_PATH - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the driver native library path.
DRIVER_LOG_CLEANER_ENABLED() - Static method in class org.apache.spark.internal.config.History

DRIVER_LOG_CLEANER_INTERVAL() - Static method in class org.apache.spark.internal.config.History

DRIVER_MEMORY - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the driver memory.
DRIVER_WAL_BATCHING_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils

DRIVER_WAL_BATCHING_TIMEOUT_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils

DRIVER_WAL_CLASS_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils

DRIVER_WAL_CLOSE_AFTER_WRITE_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils

DRIVER_WAL_MAX_FAILURES_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils

DRIVER_WAL_ROLLING_INTERVAL_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils

driverAttributes() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart

driverLogs() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart

drop() - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that drops rows containing any null or NaN values.
drop(String) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that drops rows containing null or NaN values.
drop(String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that drops rows containing any null or NaN values in the specified columns.
drop(Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
(Scala-specific) Returns a new DataFrame that drops rows containing any null or NaN values in the specified columns.
drop(String, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that drops rows containing null or NaN values in the specified columns.
drop(String, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
(Scala-specific) Returns a new DataFrame that drops rows containing null or NaN values in the specified columns.
drop(int) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that drops rows containing fewer than minNonNulls non-null and non-NaN values.
drop(int, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that drops rows containing fewer than minNonNulls non-null and non-NaN values in the specified columns.
drop(int, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
(Scala-specific) Returns a new DataFrame that drops rows containing fewer than minNonNulls non-null and non-NaN values in the specified columns.
drop(String...) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset with columns dropped.
drop(String) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset with a column dropped.
drop(Seq<String>) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset with columns dropped.
drop(Column) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset with a column dropped.
dropDatabase(String, boolean, boolean) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Drop the specified database, if it exists.
dropDuplicates(String, String...) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset with duplicate rows removed, considering only the subset of columns.
dropDuplicates() - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset that contains only the unique rows from this Dataset.
dropDuplicates(Seq<String>) - Method in class org.apache.spark.sql.Dataset
(Scala-specific) Returns a new Dataset with duplicate rows removed, considering only the subset of columns.
dropDuplicates(String[]) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset with duplicate rows removed, considering only the subset of columns.
dropDuplicates(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset with duplicate rows removed, considering only the subset of columns.
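The drop(int, ...) overloads above keep a row only if it has at least minNonNulls non-null, non-NaN values among the considered columns. A plain-Python sketch of that rule (illustrative; rows are modeled as dicts, not Spark Rows):

```python
import math

def drop_min_non_nulls(rows, min_non_nulls, cols=None):
    """Keep rows with at least `min_non_nulls` non-null, non-NaN values
    among `cols` (all columns when cols is None)."""
    def is_valid(v):
        return v is not None and not (isinstance(v, float) and math.isnan(v))
    out = []
    for row in rows:
        keys = cols if cols is not None else list(row)
        if sum(1 for k in keys if is_valid(row.get(k))) >= min_non_nulls:
            out.append(row)
    return out

rows = [
    {"a": 1, "b": None},            # one valid value
    {"a": float("nan"), "b": None}, # zero valid values
    {"a": 1, "b": 2},               # two valid values
]
kept = drop_min_non_nulls(rows, 2)  # only the fully-populated row survives
```

Note how NaN counts as missing just like null, matching the "non-null and non-NaN" wording above.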
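The dropDuplicates variants above deduplicate rows, optionally keying only on a subset of columns. The sketch below keeps the first occurrence per key for determinism; Spark itself makes no guarantee about which duplicate is retained (plain-Python illustration, not the Spark API):

```python
def drop_duplicates(rows, subset=None):
    """Keep one row per distinct key over `subset` columns
    (all columns when subset is None); first occurrence wins here."""
    seen, out = set(), []
    for row in rows:
        key = tuple(row[c] for c in (subset or sorted(row)))
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out

rows = [
    {"id": 1, "v": "x"},
    {"id": 1, "v": "y"},  # duplicate of row 1 when keyed on "id" only
    {"id": 2, "v": "x"},
]
```

Keyed on ["id"], two rows remain; keyed on all columns, all three rows are distinct.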
dropFromMemory(BlockId, Function0<Either<Object, org.apache.spark.util.io.ChunkedByteBuffer>>, ClassTag<T>) - Method in interface org.apache.spark.storage.memory.BlockEvictionHandler
Drop a block from memory, possibly putting it on disk if applicable.
dropFunction(String, String) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Drop an existing function in the database.
dropGlobalTempView(String) - Method in class org.apache.spark.sql.catalog.Catalog
Drops the global temporary view with the given view name in the catalog.
dropLast() - Method in class org.apache.spark.ml.feature.OneHotEncoder

dropLast() - Method in interface org.apache.spark.ml.feature.OneHotEncoderBase
Whether to drop the last category in the encoded vector (default: true)
dropLast() - Method in class org.apache.spark.ml.feature.OneHotEncoderModel

dropNamespace(String[]) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension

dropNamespace(String[]) - Method in interface org.apache.spark.sql.connector.catalog.SupportsNamespaces
Drop a namespace from the catalog.
dropPartitions(String, String, Seq<Map<String, String>>, boolean, boolean, boolean) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Drop one or many partitions in the given table, assuming they exist.
dropTable(Identifier) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension

dropTable(Identifier) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
Drop a table in the catalog.
dropTable(String, String, boolean, boolean) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Drop the specified table.
dropTempTable(String) - Method in class org.apache.spark.sql.SQLContext
Drops the temporary table with the given table name in the catalog.
dropTempView(String) - Method in class org.apache.spark.sql.catalog.Catalog
Drops the local temporary view with the given view name in the catalog.
dspmv(int, double, DenseVector, DenseVector, double, DenseVector) - Static method in class org.apache.spark.ml.linalg.BLAS
y := alpha*A*x + beta*y
Dst - Static variable in class org.apache.spark.graphx.TripletFields
Expose the destination and edge fields but not the source field.
dstAttr() - Method in class org.apache.spark.graphx.EdgeContext
The vertex attribute of the edge's destination vertex.
dstAttr() - Method in class org.apache.spark.graphx.EdgeTriplet
The destination vertex attribute
dstAttr() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext

dstCol() - Method in class org.apache.spark.ml.clustering.PowerIterationClustering

dstCol() - Method in interface org.apache.spark.ml.clustering.PowerIterationClusteringParams
Name of the input column for destination vertex IDs.
dstId() - Method in class org.apache.spark.graphx.Edge

dstId() - Method in class org.apache.spark.graphx.EdgeContext
The vertex id of the edge's destination vertex.
dstId() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext

dstream() - Method in class org.apache.spark.streaming.api.java.JavaDStream

dstream() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike

dstream() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream

DStream<T> - Class in org.apache.spark.streaming.dstream
A Discretized Stream (DStream), the basic abstraction in Spark Streaming, is a continuous sequence of RDDs (of the same type) representing a continuous stream of data (see org.apache.spark.rdd.RDD in the Spark core documentation for more details on RDDs).
DStream(StreamingContext, ClassTag<T>) - Constructor for class org.apache.spark.streaming.dstream.DStream

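The dspmv entry above computes the symmetric matrix-vector update y := alpha*A*x + beta*y. A plain-Python sketch of that formula over a dense symmetric matrix (the real BLAS routine uses a packed triangular layout, which is omitted here):

```python
def spmv(alpha, A, x, beta, y):
    """Return alpha*A*x + beta*y for a symmetric matrix A (dense sketch)."""
    n = len(x)
    out = []
    for i in range(n):
        ax = sum(A[i][j] * x[j] for j in range(n))  # (A*x)[i]
        out.append(alpha * ax + beta * y[i])
    return out

A = [[2.0, 1.0],
     [1.0, 3.0]]                       # symmetric 2x2 matrix
y = spmv(1.0, A, [1.0, 1.0], 0.5, [2.0, 4.0])
```

Here A*x = [3, 4] and 0.5*y = [1, 2], so the result is [4.0, 6.0].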
dtypes() - Method in class org.apache.spark.sql.Dataset
Returns all column names and their data types as an array.
DummySerializerInstance - Class in org.apache.spark.serializer
Unfortunately, we need a serializer instance in order to construct a DiskBlockObjectWriter.
duration() - Method in class org.apache.spark.scheduler.TaskInfo

duration() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo

duration() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo

duration() - Method in class org.apache.spark.status.api.v1.TaskData

DURATION() - Static method in class org.apache.spark.status.TaskIndexNames

Duration - Class in org.apache.spark.streaming

Duration(long) - Constructor for class org.apache.spark.streaming.Duration

duration() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
Return the duration of this output operation.
durationMs() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress

Durations - Class in org.apache.spark.streaming

Durations() - Constructor for class org.apache.spark.streaming.Durations


E

Edge<ED> - Class in org.apache.spark.graphx
A single directed edge consisting of a source id, target id, and the data associated with the edge.
Edge(long, long, ED) - Constructor for class org.apache.spark.graphx.Edge

EdgeActiveness - Enum in org.apache.spark.graphx.impl
Criteria for filtering edges based on activeness.
EdgeContext<VD,ED,A> - Class in org.apache.spark.graphx
Represents an edge along with its neighboring vertices and allows sending messages along the edge.
EdgeContext() - Constructor for class org.apache.spark.graphx.EdgeContext

EdgeDirection - Class in org.apache.spark.graphx
The direction of a directed edge relative to a vertex.
EdgeDirection() - Constructor for class org.apache.spark.graphx.EdgeDirection

edgeListFile(SparkContext, String, boolean, int, StorageLevel, StorageLevel) - Static method in class org.apache.spark.graphx.GraphLoader
Loads a graph from an edge list formatted file where each line contains two integers: a source id and a target id.
EdgeOnly - Static variable in class org.apache.spark.graphx.TripletFields
Expose only the edge field and not the source or destination field.
EdgePartition1D$() - Constructor for class org.apache.spark.graphx.PartitionStrategy.EdgePartition1D$

EdgePartition2D$() - Constructor for class org.apache.spark.graphx.PartitionStrategy.EdgePartition2D$

EdgeRDD<ED> - Class in org.apache.spark.graphx
EdgeRDD[ED, VD] extends RDD[Edge[ED]] by storing the edges in columnar format on each partition for performance.
EdgeRDD(SparkContext, Seq<Dependency<?>>) - Constructor for class org.apache.spark.graphx.EdgeRDD

EdgeRDDImpl<ED,VD> - Class in org.apache.spark.graphx.impl

edges() - Method in class org.apache.spark.graphx.Graph
An RDD containing the edges and their associated attributes.
edges() - Method in class org.apache.spark.graphx.impl.GraphImpl

EdgeTriplet<VD,ED> - Class in org.apache.spark.graphx
An edge triplet represents an edge along with the vertex attributes of its neighboring vertices.
EdgeTriplet() - Constructor for class org.apache.spark.graphx.EdgeTriplet

EigenValueDecomposition - Class in org.apache.spark.mllib.linalg
Compute eigen-decomposition.
EigenValueDecomposition() - Constructor for class org.apache.spark.mllib.linalg.EigenValueDecomposition

Either() - Static method in class org.apache.spark.graphx.EdgeDirection
Edges originating from *or* arriving at a vertex of interest.
elasticNetParam() - Method in class org.apache.spark.ml.classification.LogisticRegression

elasticNetParam() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel

elasticNetParam() - Method in interface org.apache.spark.ml.param.shared.HasElasticNetParam
Param for the ElasticNet mixing parameter, in range [0, 1].
elasticNetParam() - Method in class org.apache.spark.ml.regression.LinearRegression

elasticNetParam() - Method in class org.apache.spark.ml.regression.LinearRegressionModel

elem(String, Function1<Object, Object>) - Static method in class org.apache.spark.ml.feature.RFormulaParser

elem(Parsers) - Static method in class org.apache.spark.ml.feature.RFormulaParser

element_at(Column, Object) - Static method in class org.apache.spark.sql.functions
Returns the element of the array at the given index in value if the column is an array.
elementType() - Method in class org.apache.spark.sql.types.ArrayType

ElementwiseProduct - Class in org.apache.spark.ml.feature
Outputs the Hadamard product (i.e., the element-wise product) of each input vector with a provided "weight" vector.
ElementwiseProduct(String) - Constructor for class org.apache.spark.ml.feature.ElementwiseProduct

ElementwiseProduct() - Constructor for class org.apache.spark.ml.feature.ElementwiseProduct

ElementwiseProduct - Class in org.apache.spark.mllib.feature
Outputs the Hadamard product (i.e., the element-wise product) of each input vector with a provided "weight" vector.
ElementwiseProduct(Vector) - Constructor for class org.apache.spark.mllib.feature.ElementwiseProduct

elems() - Method in class org.apache.spark.status.api.v1.StackTrace

EMLDAOptimizer - Class in org.apache.spark.mllib.clustering
:: DeveloperApi :: Optimizer for the EM algorithm which stores data + parameter graph, plus algorithm parameters.
EMLDAOptimizer() - Constructor for class org.apache.spark.mllib.clustering.EMLDAOptimizer

empty() - Static method in class org.apache.spark.api.java.Optional

empty() - Static method in class org.apache.spark.ml.param.ParamMap
Returns an empty param map.
empty() - Method in class org.apache.spark.mllib.fpm.PrefixSpan.Prefix$
An empty Prefix instance.
empty() - Static method in class org.apache.spark.sql.types.Metadata
Returns an empty Metadata.
empty() - Static method in class org.apache.spark.sql.util.CaseInsensitiveStringMap

empty() - Static method in class org.apache.spark.storage.BlockStatus

EMPTY_USER_GROUPS() - Static method in class org.apache.spark.util.Utils

emptyDataFrame() - Method in class org.apache.spark.sql.SparkSession

emptyDataFrame() - Method in class org.apache.spark.sql.SQLContext
Returns a DataFrame with no rows or columns.
emptyDataset(Encoder<T>) - Method in class org.apache.spark.sql.SparkSession
Creates a new Dataset of type T containing zero elements.
emptyNode(int) - Static method in class org.apache.spark.mllib.tree.model.Node
Return a node with the given node id (but nothing else set).
emptyRDD() - Method in class org.apache.spark.api.java.JavaSparkContext
Get an RDD that has no partitions or elements.
emptyRDD(ClassTag<T>) - Method in class org.apache.spark.SparkContext
Get an RDD that has no partitions or elements.
EmptyTaskCommitMessage$() - Constructor for class org.apache.spark.internal.io.FileCommitProtocol.EmptyTaskCommitMessage$

EmptyTerm - Class in org.apache.spark.ml.feature
Placeholder term for the result of undefined interactions, e.g. '1:1' or 'a:1'
EmptyTerm() - Constructor for class org.apache.spark.ml.feature.EmptyTerm

enableHiveSupport() - Method in class org.apache.spark.sql.SparkSession.Builder
Enables Hive support, including connectivity to a persistent Hive metastore, support for Hive serdes, and Hive user-defined functions.
enableReceiverLog(SparkConf) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils

encode(Column, String) - Static method in class org.apache.spark.sql.functions
Computes the first argument into a binary from a string using the provided character set (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16').
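element_at uses 1-based indexing on arrays; a negative index counts from the end, and an out-of-range index yields null. A plain-Python sketch of those documented semantics (illustrative, not the Spark function):

```python
def element_at(arr, index):
    """1-based array lookup: positive indexes from the front, negative
    from the end; None for out-of-range; index 0 is invalid."""
    if index == 0:
        raise ValueError("element_at index must not be 0")
    i = index - 1 if index > 0 else len(arr) + index
    return arr[i] if 0 <= i < len(arr) else None
```

For example, on [10, 20, 30]: index 1 gives 10, index -1 gives 30, and index 5 gives None.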
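Both ElementwiseProduct classes above multiply each input vector element-wise by a fixed weight vector. The Hadamard product itself is one line of plain Python (sketch only; the Spark transformers operate on ml/mllib Vector columns):

```python
def elementwise_product(vec, weights):
    """Hadamard (element-wise) product of an input vector and a weight vector."""
    if len(vec) != len(weights):
        raise ValueError("vectors must have the same length")
    return [v * w for v, w in zip(vec, weights)]

result = elementwise_product([1.0, 2.0, 3.0], [0.0, 0.5, 2.0])
```

A zero weight masks a component out entirely, which is a common use of this transformer.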
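The encode function above turns a string column into binary using one of six named charsets. The mapping from those Java charset names to byte output can be sketched in plain Python (illustrative; the real function operates on Columns):

```python
def encode(s, charset):
    """String -> bytes using one of the charsets the entry above lists.
    The dict maps Java charset names to Python codec names."""
    supported = {
        "US-ASCII": "ascii",
        "ISO-8859-1": "latin-1",
        "UTF-8": "utf-8",
        "UTF-16BE": "utf-16-be",
        "UTF-16LE": "utf-16-le",
        "UTF-16": "utf-16",
    }
    return s.encode(supported[charset])
```

UTF-8 encodes ASCII text one byte per character, while UTF-16BE uses two, which is easy to verify on a short string.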
encodeFileNameToURIRawPath(String) - Static method in class org.apache.spark.util.Utils
A file name may contain some invalid URI characters, such as " ".
encoder() - Method in class org.apache.spark.sql.Dataset

Encoder<T> - Interface in org.apache.spark.sql
Used to convert a JVM object of type T to and from the internal Spark SQL representation.
Encoders - Class in org.apache.spark.sql
Methods for creating an Encoder.
Encoders() - Constructor for class org.apache.spark.sql.Encoders

END_EVENT_REPARSE_CHUNK_SIZE() - Static method in class org.apache.spark.internal.config.History

endOffset() - Method in class org.apache.spark.sql.streaming.SourceProgress

endOffset() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException

endReduceId() - Method in class org.apache.spark.storage.ShuffleBlockBatchId

endsWith(Column) - Method in class org.apache.spark.sql.Column
String ends with.
endsWith(String) - Method in class org.apache.spark.sql.Column
String ends with another string literal.
endTime() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo

endTime() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo

endTime() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo

EnsembleCombiningStrategy - Class in org.apache.spark.mllib.tree.configuration
Enum to select the ensemble combining strategy for base learners.
EnsembleCombiningStrategy() - Constructor for class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy

EnsembleModelReadWrite - Class in org.apache.spark.ml.tree

EnsembleModelReadWrite() - Constructor for class org.apache.spark.ml.tree.EnsembleModelReadWrite

EnsembleModelReadWrite.EnsembleNodeData - Class in org.apache.spark.ml.tree
Info for one Node in a tree ensemble. param: treeID Tree index. param: nodeData Data for this node.
EnsembleModelReadWrite.EnsembleNodeData$ - Class in org.apache.spark.ml.tree

EnsembleNodeData(int, DecisionTreeModelReadWrite.NodeData) - Constructor for class org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData

EnsembleNodeData$() - Constructor for class org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData$

entries() - Method in class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix

Entropy - Class in org.apache.spark.mllib.tree.impurity
Class for calculating entropy during multiclass classification.
Entropy() - Constructor for class org.apache.spark.mllib.tree.impurity.Entropy

entrySet() - Method in class org.apache.spark.api.java.JavaUtils.SerializableMapWrapper

entrySet() - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap

EnumUtil - Class in org.apache.spark.util

EnumUtil() - Constructor for class org.apache.spark.util.EnumUtil

environmentDetails() - Method in class org.apache.spark.scheduler.SparkListenerEnvironmentUpdate

environmentUpdateFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

environmentUpdateToJson(SparkListenerEnvironmentUpdate) - Static method in class org.apache.spark.util.JsonProtocol

 
eps() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

EPSILON() - Static method in class org.apache.spark.ml.impl.Utils

epsilon() - Method in class org.apache.spark.ml.regression.LinearRegression

epsilon() - Method in class org.apache.spark.ml.regression.LinearRegressionModel

epsilon() - Method in interface org.apache.spark.ml.regression.LinearRegressionParams
The shape parameter to control the amount of robustness.
eqNullSafe(Object) - Method in class org.apache.spark.sql.Column
Equality test that is safe for null values.
EqualNullSafe - Class in org.apache.spark.sql.sources
Performs equality comparison, similar to EqualTo.
EqualNullSafe(String, Object) - Constructor for class org.apache.spark.sql.sources.EqualNullSafe

equals(Object) - Method in class org.apache.spark.api.java.Optional

equals(Object) - Static method in class org.apache.spark.ExpireDeadHosts

equals(Object) - Method in class org.apache.spark.graphx.EdgeDirection

equals(Object) - Method in class org.apache.spark.HashPartitioner

equals(Object) - Static method in class org.apache.spark.metrics.DirectPoolMemory

equals(Object) - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics

equals(Object) - Static method in class org.apache.spark.metrics.JVMHeapMemory

equals(Object) - Static method in class org.apache.spark.metrics.JVMOffHeapMemory

equals(Object) - Static method in class org.apache.spark.metrics.MappedPoolMemory

equals(Object) - Static method in class org.apache.spark.metrics.OffHeapExecutionMemory

equals(Object) - Static method in class org.apache.spark.metrics.OffHeapStorageMemory

equals(Object) - Static method in class org.apache.spark.metrics.OffHeapUnifiedMemory

equals(Object) - Static method in class org.apache.spark.metrics.OnHeapExecutionMemory

equals(Object) - Static method in class org.apache.spark.metrics.OnHeapStorageMemory

equals(Object) - Static method in class org.apache.spark.metrics.OnHeapUnifiedMemory

equals(Object) - Static method in class org.apache.spark.metrics.ProcessTreeMetrics

equals(Object) - Method in class org.apache.spark.ml.attribute.AttributeGroup

equals(Object) - Method in class org.apache.spark.ml.attribute.BinaryAttribute

equals(Object) - Method in class org.apache.spark.ml.attribute.NominalAttribute

equals(Object) - Method in class org.apache.spark.ml.attribute.NumericAttribute

equals(Object) - Static method in class org.apache.spark.ml.feature.Dot

equals(Object) - Static method in class org.apache.spark.ml.feature.EmptyTerm

equals(Object) - Method in class org.apache.spark.ml.linalg.DenseMatrix

equals(Object) - Method in class org.apache.spark.ml.linalg.DenseVector

equals(Object) - Method in class org.apache.spark.ml.linalg.SparseMatrix

equals(Object) - Method in class org.apache.spark.ml.linalg.SparseVector

equals(Object) - Method in interface org.apache.spark.ml.linalg.Vector

equals(Object) - Method in class org.apache.spark.ml.param.Param

equals(Object) - Method in class org.apache.spark.ml.tree.CategoricalSplit

equals(Object) - Method in class org.apache.spark.ml.tree.ContinuousSplit

equals(Object) - Method in class org.apache.spark.mllib.linalg.DenseMatrix

equals(Object) - Method in class org.apache.spark.mllib.linalg.DenseVector

equals(Object) - Method in class org.apache.spark.mllib.linalg.SparseMatrix

equals(Object) - Method in class org.apache.spark.mllib.linalg.SparseVector

equals(Object) - Method in interface org.apache.spark.mllib.linalg.Vector

equals(Object) - Method in class org.apache.spark.mllib.linalg.VectorUDT

equals(Object) - Method in class org.apache.spark.mllib.tree.model.InformationGainStats

equals(Object) - Method in class org.apache.spark.mllib.tree.model.Predict

equals(Object) - Method in class org.apache.spark.partial.BoundedDouble

equals(Object) - Method in interface org.apache.spark.Partition

equals(Object) - Method in class org.apache.spark.RangePartitioner

equals(Object) - Method in class org.apache.spark.resource.ResourceInformation

equals(Object) - Static method in class org.apache.spark.Resubmitted

equals(Object) - Static method in class org.apache.spark.rpc.netty.OnStart

equals(Object) - Static method in class org.apache.spark.rpc.netty.OnStop

equals(Object) - Static method in class org.apache.spark.scheduler.AllJobsCancelled

equals(Object) - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo

equals(Object) - Method in class org.apache.spark.scheduler.InputFormatInfo

equals(Object) - Static method in class org.apache.spark.scheduler.JobSucceeded

equals(Object) - Static method in class org.apache.spark.scheduler.ResubmitFailedStages

equals(Object) - Method in class org.apache.spark.scheduler.SplitInfo

equals(Object) - Static method in class org.apache.spark.scheduler.StopCoordinator

equals(Object) - Method in class org.apache.spark.sql.Column

equals(Object) - Method in class org.apache.spark.sql.connector.catalog.TableChange.AddColumn

equals(Object) - Method in class org.apache.spark.sql.connector.catalog.TableChange.DeleteColumn

equals(Object) - Method in class org.apache.spark.sql.connector.catalog.TableChange.RemoveProperty

equals(Object) - Method in class org.apache.spark.sql.connector.catalog.TableChange.RenameColumn

equals(Object) - Method in class org.apache.spark.sql.connector.catalog.TableChange.SetProperty

equals(Object) - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnComment

equals(Object) - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnType

equals(Object) - Method in class org.apache.spark.sql.connector.read.streaming.Offset
Equality based on JSON string representation.
equals(Object) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect

equals(Object) - Static method in class org.apache.spark.sql.jdbc.OracleDialect

equals(Object) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect

equals(Object) - Method in interface org.apache.spark.sql.Row

equals(Object) - Static method in class org.apache.spark.sql.sources.AlwaysFalse

equals(Object) - Static method in class org.apache.spark.sql.sources.AlwaysTrue

equals(Object) - Method in class org.apache.spark.sql.sources.In

equals(Object) - Static method in class org.apache.spark.sql.types.BinaryType

equals(Object) - Static method in class org.apache.spark.sql.types.BooleanType

equals(Object) - Static method in class org.apache.spark.sql.types.ByteExactNumeric

equals(Object) - Static method in class org.apache.spark.sql.types.ByteType

equals(Object) - Static method in class org.apache.spark.sql.types.CalendarIntervalType

equals(Object) - Static method in class org.apache.spark.sql.types.DateType

equals(Object) - Method in class org.apache.spark.sql.types.Decimal

equals(Object) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric

equals(Object) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric

equals(Object) - Static method in class org.apache.spark.sql.types.DoubleType

equals(Object) - Static method in class org.apache.spark.sql.types.FloatExactNumeric

equals(Object) - Static method in class org.apache.spark.sql.types.FloatType

equals(Object) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric

equals(Object) - Static method in class org.apache.spark.sql.types.IntegerType

equals(Object) - Static method in class org.apache.spark.sql.types.LongExactNumeric

equals(Object) - Static method in class org.apache.spark.sql.types.LongType

equals(Object) - Method in class org.apache.spark.sql.types.Metadata

equals(Object) - Static method in class org.apache.spark.sql.types.NullType

equals(Object) - Static method in class org.apache.spark.sql.types.ShortExactNumeric

equals(Object) - Static method in class org.apache.spark.sql.types.ShortType

equals(Object) - Static method in class org.apache.spark.sql.types.StringType

equals(Object) - Method in class org.apache.spark.sql.types.StructType

equals(Object) - Static method in class org.apache.spark.sql.types.TimestampType

equals(Object) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap

equals(Object) - Static method in class org.apache.spark.StopMapOutputTracker

equals(Object) - Method in class org.apache.spark.storage.BlockManagerId

equals(Object) - Method in class org.apache.spark.storage.StorageLevel

equals(Object) - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials

equals(Object) - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds

equals(Object) - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo

equals(Object) - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers

equals(Object) - Static method in class org.apache.spark.Success

equals(Object) - Static method in class org.apache.spark.TaskResultLost

equals(Object) - Static method in class org.apache.spark.TaskSchedulerIsSet

equals(Object) - Static method in class org.apache.spark.UnknownReason

equalsStructurally(DataType, DataType, boolean) - Static method in class org.apache.spark.sql.types.DataType
Returns true if the two data types share the same "shape", i.e. the types are the same, but the field names don't need to be the same.
equalTo(Object) - Method in class org.apache.spark.sql.Column
Equality test.
EqualTo - Class in org.apache.spark.sql.sources
A filter that evaluates to true iff the attribute evaluates to a value equal to value.
EqualTo(String, Object) - Constructor for class org.apache.spark.sql.sources.EqualTo

equiv(T, T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric

equiv(T, T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric

equiv(double, double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric

equiv(float, float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric

equiv(T, T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric

equiv(T, T) - Static method in class org.apache.spark.sql.types.LongExactNumeric

equiv(T, T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric

err(String) - Static method in class org.apache.spark.ml.feature.RFormulaParser

Error() - Static method in class org.apache.spark.ml.feature.RFormulaParser

ERROR() - Static method in class org.apache.spark.status.TaskIndexNames

ErrorHandlingReadableChannel(ReadableByteChannel, ReadableByteChannel) - Constructor for class org.apache.spark.security.CryptoStreamUtils.ErrorHandlingReadableChannel

errorMessage() - Method in class org.apache.spark.status.api.v1.TaskData

errorMessage() - Method in class org.apache.spark.status.LiveTask

estimate(double[]) - Method in class org.apache.spark.mllib.stat.KernelDensity
Estimates probability density function at the given array of points.
estimate(Object) - Static method in class org.apache.spark.util.SizeEstimator
Estimate the number of bytes that the given object takes up on the JVM heap.
estimateCount(Object) - Method in class org.apache.spark.util.sketch.CountMinSketch
Returns the estimated frequency of item.
estimatedDocConcentration() - Method in class org.apache.spark.ml.clustering.LDAModel
Value for docConcentration estimated from data.
estimatedSize() - Method in class org.apache.spark.storage.memory.DeserializedValuesHolder

estimatedSize() - Method in class org.apache.spark.storage.memory.SerializedValuesHolder

estimatedSize() - Method in interface org.apache.spark.storage.memory.ValuesHolder

estimatedSize() - Method in interface org.apache.spark.util.KnownSizeEstimation

estimateStatistics() - Method in interface org.apache.spark.sql.connector.read.SupportsReportStatistics
Returns the estimated statistics of this data source scan.
Estimator<M extends Model<M>> - Class in org.apache.spark.ml
:: DeveloperApi :: Abstract class for estimators that fit models to data.
Estimator() - Constructor for class org.apache.spark.ml.Estimator

estimator() - Method in class org.apache.spark.ml.FitEnd

estimator() - Method in class org.apache.spark.ml.FitStart

estimator() - Method in class org.apache.spark.ml.tuning.CrossValidator

estimator() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel

estimator() - Method in class org.apache.spark.ml.tuning.TrainValidationSplit

estimator() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel

estimator() - Method in interface org.apache.spark.ml.tuning.ValidatorParams
Param for the estimator to be validated.
estimatorParamMaps() - Method in class org.apache.spark.ml.tuning.CrossValidator

estimatorParamMaps() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel

estimatorParamMaps() - Method in class org.apache.spark.ml.tuning.TrainValidationSplit

estimatorParamMaps() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel

estimatorParamMaps() - Method in interface org.apache.spark.ml.tuning.ValidatorParams
Param for estimator param maps.
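The "same shape" rule above compares type trees while ignoring struct field names. A plain-Python sketch of that recursion (the dict schema encoding and the `ignore_names` behavior for the boolean flag are illustrative assumptions, not Spark's internal representation):

```python
def equals_structurally(a, b):
    """True if two schema trees have the same shape: types must match
    at every position, but struct field names are ignored."""
    if a["type"] != b["type"]:
        return False
    if a["type"] == "struct":
        fa, fb = a["fields"], b["fields"]
        return len(fa) == len(fb) and all(
            equals_structurally(x, y) for x, y in zip(fa, fb))
    return True

s1 = {"type": "struct", "fields": [{"name": "a", "type": "int"},
                                   {"name": "b", "type": "string"}]}
s2 = {"type": "struct", "fields": [{"name": "x", "type": "int"},
                                   {"name": "y", "type": "string"}]}
s3 = {"type": "struct", "fields": [{"name": "a", "type": "long"},
                                   {"name": "b", "type": "string"}]}
```

s1 and s2 differ only in field names, so they are structurally equal; s3 changes a type, so it is not.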
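KernelDensity estimates a pdf by averaging Gaussian kernels centred on the samples. A plain-Python sketch of that estimate at a set of query points (illustrative only; the Spark class takes an RDD of samples and a bandwidth setter):

```python
import math

def gaussian_kde(samples, bandwidth, points):
    """Density at each point: mean of Gaussian kernels centred on samples."""
    norm = 1.0 / (bandwidth * math.sqrt(2 * math.pi))
    out = []
    for p in points:
        k = sum(math.exp(-((p - s) / bandwidth) ** 2 / 2) for s in samples)
        out.append(norm * k / len(samples))
    return out

d0, d5 = gaussian_kde([-1.0, 0.0, 1.0], 1.0, [0.0, 5.0])
```

The estimate is highest where the samples cluster, so the density at 0 far exceeds the density at 5.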
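estimateCount returns an estimate that may over-count (due to hash collisions) but never under-counts the true frequency. A toy plain-Python count-min sketch showing that property (illustrative; the hashing scheme here is an assumption, not Spark's implementation):

```python
import random

class CountMinSketchToy:
    """Toy count-min sketch: each item increments one cell per row;
    the estimate is the minimum over its cells, so it never under-counts."""

    def __init__(self, width=64, depth=4, seed=7):
        rnd = random.Random(seed)
        self.width = width
        self.salts = [rnd.getrandbits(32) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _cells(self, item):
        for row, salt in enumerate(self.salts):
            yield row, hash((salt, item)) % self.width

    def add(self, item, count=1):
        for row, col in self._cells(item):
            self.table[row][col] += count

    def estimate_count(self, item):
        return min(self.table[row][col] for row, col in self._cells(item))

cms = CountMinSketchToy()
for _ in range(3):
    cms.add("a")
cms.add("b")
```

The estimate for "a" is at least 3 and for "b" at least 1; collisions can only inflate these values.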
eval() - 接口 中的方法org.apache.spark.ml.ann.ActivationFunction
Implements a function
eval(DenseMatrix<Object>, DenseMatrix<Object>) - 接口 中的方法org.apache.spark.ml.ann.LayerModel
Evaluates the data (process the data through the layer).
evaluate(Dataset<?>) - 类 中的方法org.apache.spark.ml.classification.LogisticRegressionModel
Evaluates the model on a test dataset.
evaluate(Dataset<?>) - 类 中的方法org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
 
evaluate(Dataset<?>) - 类 中的方法org.apache.spark.ml.evaluation.ClusteringEvaluator
 
evaluate(Dataset<?>, ParamMap) - 类 中的方法org.apache.spark.ml.evaluation.Evaluator
Evaluates model output and returns a scalar metric.
evaluate(Dataset<?>) - 类 中的方法org.apache.spark.ml.evaluation.Evaluator
Evaluates model output and returns a scalar metric.
evaluate(Dataset<?>) - 类 中的方法org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
 
evaluate(Dataset<?>) - 类 中的方法org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
 
evaluate(Dataset<?>) - 类 中的方法org.apache.spark.ml.evaluation.RankingEvaluator
 
evaluate(Dataset<?>) - 类 中的方法org.apache.spark.ml.evaluation.RegressionEvaluator
 
evaluate(Dataset<?>) - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
Evaluate the model on the given dataset, returning a summary of the results.
evaluate(Dataset<?>) - 类 中的方法org.apache.spark.ml.regression.LinearRegressionModel
Evaluates the model on a test dataset.
evaluate(Row) - 类 中的方法org.apache.spark.sql.expressions.UserDefinedAggregateFunction
Calculates the final result of this UserDefinedAggregateFunction based on the given aggregation buffer.
evaluateEachIteration(Dataset<?>) - 类 中的方法org.apache.spark.ml.classification.GBTClassificationModel
Method to compute error or loss for every iteration of gradient boosting.
evaluateEachIteration(Dataset<?>, String) - 类 中的方法org.apache.spark.ml.regression.GBTRegressionModel
Method to compute error or loss for every iteration of gradient boosting.
evaluateEachIteration(RDD<org.apache.spark.ml.feature.Instance>, DecisionTreeRegressionModel[], double[], Loss, Enumeration.Value) - 类 中的静态方法org.apache.spark.ml.tree.impl.GradientBoostedTrees
Method to compute error or loss for every iteration of gradient boosting.
evaluateEachIteration(RDD<LabeledPoint>, Loss) - 类 中的方法org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
Method to compute error or loss for every iteration of gradient boosting.
Evaluator - org.apache.spark.ml.evaluation中的类
:: DeveloperApi :: Abstract class for evaluators that compute metrics from predictions.
Evaluator() - 类 的构造器org.apache.spark.ml.evaluation.Evaluator
 
evaluator() - 类 中的方法org.apache.spark.ml.tuning.CrossValidator
 
evaluator() - 类 中的方法org.apache.spark.ml.tuning.CrossValidatorModel
 
evaluator() - 类 中的方法org.apache.spark.ml.tuning.TrainValidationSplit
 
evaluator() - 类 中的方法org.apache.spark.ml.tuning.TrainValidationSplitModel
 
evaluator() - 接口 中的方法org.apache.spark.ml.tuning.ValidatorParams
param for the evaluator used to select hyper-parameters that maximize the validated metric
eventRates() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo

eventTime() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress

EventTimeTimeout() - Static method in class org.apache.spark.sql.streaming.GroupStateTimeout
Timeout based on event-time.
except(Dataset<T>) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset containing rows in this Dataset but not in another Dataset.
exceptAll(Dataset<T>) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset containing rows in this Dataset but not in another Dataset, while preserving the duplicates.
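The two entries above differ only in duplicate handling: except behaves like SQL EXCEPT (distinct set difference), while exceptAll behaves like SQL EXCEPT ALL (multiset difference). A minimal sketch, assuming a local SparkSession; the master URL and app name here are illustrative:

```scala
import org.apache.spark.sql.SparkSession

// Local session purely for this sketch
val spark = SparkSession.builder().master("local[1]").appName("except-sketch").getOrCreate()
import spark.implicits._

val left  = Seq(1, 1, 2, 3).toDS()
val right = Seq(1, 2).toDS()

// except: set difference, result is deduplicated (SQL EXCEPT)
val setDiff = left.except(right).collect().sorted      // Array(3)
// exceptAll: multiset difference, surplus duplicates survive (SQL EXCEPT ALL)
val bagDiff = left.exceptAll(right).collect().sorted   // Array(1, 3)

assert(setDiff.sameElements(Array(3)))
assert(bagDiff.sameElements(Array(1, 3)))
spark.stop()
```

Note that exceptAll keeps one surplus copy of 1 because the left side holds two and the right side removes only one.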
exception() - Method in class org.apache.spark.ExceptionFailure

exception() - Method in class org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread
Contains the exception thrown while writing the parent iterator to the external process.
exception() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
Returns the StreamingQueryException if the query was terminated by an exception.
exception() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryTerminatedEvent

ExceptionFailure - Class in org.apache.spark
:: DeveloperApi :: Task failed due to a runtime exception.
ExceptionFailure(String, String, StackTraceElement[], String, Option<ThrowableSerializationWrapper>, Seq<AccumulableInfo>, Seq<AccumulatorV2<?, ?>>, Seq<Object>) - Constructor for class org.apache.spark.ExceptionFailure

exceptionFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

exceptionString(Throwable) - Static method in class org.apache.spark.util.Utils
Return a nice string representation of the exception.
exceptionToJson(Exception) - Static method in class org.apache.spark.util.JsonProtocol

EXEC_CPU_TIME() - Static method in class org.apache.spark.status.TaskIndexNames

EXEC_RUN_TIME() - Static method in class org.apache.spark.status.TaskIndexNames

execId() - Method in class org.apache.spark.ExecutorLostFailure

execId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate

execId() - Method in class org.apache.spark.scheduler.SparkListenerStageExecutorMetrics

execId() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveExecutor

executeAndGetOutput(Seq<String>, File, Map<String, String>, boolean) - Static method in class org.apache.spark.util.Utils
Execute a command and get its output, throwing an exception if it yields a code other than 0.
executeCommand(Seq<String>, File, Map<String, String>, boolean) - Static method in class org.apache.spark.util.Utils
Execute a command and return the process running the command.
executionId() - Method in interface org.apache.spark.sql.hive.execution.SaveAsHiveFile

ExecutionListenerManager - Class in org.apache.spark.sql.util
EXECUTOR() - Static method in class org.apache.spark.metrics.MetricsSystemInstances

executor() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask

executor() - Method in interface org.apache.spark.shuffle.api.ShuffleDataIO
Called once on executor processes to bootstrap the shuffle data storage modules that are only invoked on the executors.
EXECUTOR() - Static method in class org.apache.spark.status.TaskIndexNames

EXECUTOR_CORES - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the number of executor CPU cores.
EXECUTOR_CPU_TIME() - Static method in class org.apache.spark.InternalAccumulator

EXECUTOR_DEFAULT_JAVA_OPTIONS - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the default executor VM options.
EXECUTOR_DESERIALIZE_CPU_TIME() - Static method in class org.apache.spark.InternalAccumulator

EXECUTOR_DESERIALIZE_TIME() - Static method in class org.apache.spark.InternalAccumulator

EXECUTOR_EXTRA_CLASSPATH - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the executor class path.
EXECUTOR_EXTRA_JAVA_OPTIONS - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the executor VM options.
EXECUTOR_EXTRA_LIBRARY_PATH - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the executor native library path.
EXECUTOR_MEMORY - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the executor memory.
EXECUTOR_RUN_TIME() - Static method in class org.apache.spark.InternalAccumulator

executorAddedFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

executorAddedToJson(SparkListenerExecutorAdded) - Static method in class org.apache.spark.util.JsonProtocol

ExecutorAllocationClient - Interface in org.apache.spark
A client that communicates with the cluster manager to request or kill executors.
executorCpuTime() - Method in class org.apache.spark.status.api.v1.StageData

executorCpuTime() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions

executorCpuTime() - Method in class org.apache.spark.status.api.v1.TaskMetrics

executorDeserializeCpuTime() - Method in class org.apache.spark.status.api.v1.StageData

executorDeserializeCpuTime() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions

executorDeserializeCpuTime() - Method in class org.apache.spark.status.api.v1.TaskMetrics

executorDeserializeTime() - Method in class org.apache.spark.status.api.v1.StageData

executorDeserializeTime() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions

executorDeserializeTime() - Method in class org.apache.spark.status.api.v1.TaskMetrics

executorFailures() - Method in class org.apache.spark.scheduler.SparkListenerNodeBlacklisted

executorFailures() - Method in class org.apache.spark.scheduler.SparkListenerNodeBlacklistedForStage

executorHeartbeatReceived(String, Tuple2<Object, Seq<AccumulatorV2<?, ?>>>[], BlockManagerId, Map<Tuple2<Object, Object>, ExecutorMetrics>) - Method in interface org.apache.spark.scheduler.TaskScheduler
Update metrics for in-progress tasks and executor metrics, and let the master know that the BlockManager is still alive.
executorHost() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo

executorHost() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo

executorHostName - Variable in class org.apache.spark.ExecutorPluginContext

executorId - Variable in class org.apache.spark.ExecutorPluginContext

executorId() - Method in class org.apache.spark.ExecutorRegistered

executorId() - Method in class org.apache.spark.ExecutorRemoved

executorId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.GetExecutorLossReason

executorId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor

executorId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor

executorId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate

executorId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorAdded

executorId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted

executorId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklistedForStage

executorId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved

executorId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted

executorId() - Method in class org.apache.spark.scheduler.TaskInfo

executorId() - Method in class org.apache.spark.SparkEnv

executorId() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo

executorId() - Method in class org.apache.spark.status.api.v1.TaskData

executorId() - Method in class org.apache.spark.status.LiveExecutor

executorId() - Method in class org.apache.spark.status.LiveRDDDistribution

executorId() - Method in class org.apache.spark.storage.BlockManagerId

executorId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef

executorId() - Method in class org.apache.spark.storage.BlockManagerMessages.IsExecutorAlive

executorId() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo

executorId() - Method in class org.apache.spark.ui.storage.ExecutorStreamSummary

executorIds() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutors

ExecutorInfo - Class in org.apache.spark.scheduler.cluster
:: DeveloperApi :: Stores information about an executor to pass from the scheduler to SparkListeners.
ExecutorInfo(String, int, Map<String, String>, Map<String, String>, Map<String, ResourceInformation>) - Constructor for class org.apache.spark.scheduler.cluster.ExecutorInfo

ExecutorInfo(String, int, Map<String, String>) - Constructor for class org.apache.spark.scheduler.cluster.ExecutorInfo

ExecutorInfo(String, int, Map<String, String>, Map<String, String>) - Constructor for class org.apache.spark.scheduler.cluster.ExecutorInfo

executorInfo() - Method in class org.apache.spark.scheduler.SparkListenerExecutorAdded

executorInfoFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

executorInfoToJson(ExecutorInfo) - Static method in class org.apache.spark.util.JsonProtocol

ExecutorKilled - Class in org.apache.spark.scheduler

ExecutorKilled() - Constructor for class org.apache.spark.scheduler.ExecutorKilled

executorLogs() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

executorLogs() - Method in class org.apache.spark.status.api.v1.TaskData

executorLogs() - Method in class org.apache.spark.status.LiveExecutor

executorLost(String, String, ExecutorLossReason) - Method in interface org.apache.spark.scheduler.Schedulable

executorLost(String, ExecutorLossReason) - Method in interface org.apache.spark.scheduler.TaskScheduler
Process a lost executor.
ExecutorLostFailure - Class in org.apache.spark
:: DeveloperApi :: The task failed because the executor that it was running on was lost.
ExecutorLostFailure(String, boolean, Option<String>) - Constructor for class org.apache.spark.ExecutorLostFailure

executorMetrics() - Method in class org.apache.spark.scheduler.SparkListenerStageExecutorMetrics

executorMetricsFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
Extract the executor metrics from JSON.
executorMetricsToJson(ExecutorMetrics) - Static method in class org.apache.spark.util.JsonProtocol
Convert executor metrics to JSON.
executorMetricsUpdateFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

executorMetricsUpdateToJson(SparkListenerExecutorMetricsUpdate) - Static method in class org.apache.spark.util.JsonProtocol

ExecutorMetricType - Interface in org.apache.spark.metrics
Executor metric types for executor-level metrics stored in ExecutorMetrics.
executorPct() - Method in class org.apache.spark.scheduler.RuntimePercentage

ExecutorPlugin - Interface in org.apache.spark
A plugin which can be automatically instantiated within each Spark executor.
ExecutorPluginContext - Class in org.apache.spark
Encapsulates information about the executor when initializing ExecutorPlugin instances.
ExecutorPluginContext(MetricRegistry, SparkConf, String, String, boolean) - Constructor for class org.apache.spark.ExecutorPluginContext

executorRef() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor

ExecutorRegistered - Class in org.apache.spark

ExecutorRegistered(String) - Constructor for class org.apache.spark.ExecutorRegistered

ExecutorRemoved - Class in org.apache.spark

ExecutorRemoved(String) - Constructor for class org.apache.spark.ExecutorRemoved

executorRemovedFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

executorRemovedToJson(SparkListenerExecutorRemoved) - Static method in class org.apache.spark.util.JsonProtocol

executorRunTime() - Method in class org.apache.spark.status.api.v1.StageData

executorRunTime() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions

executorRunTime() - Method in class org.apache.spark.status.api.v1.TaskMetrics

executors() - Method in class org.apache.spark.status.api.v1.RDDPartitionInfo

executors() - Method in class org.apache.spark.status.LiveRDDPartition

ExecutorStageSummary - Class in org.apache.spark.status.api.v1

ExecutorStreamSummary - Class in org.apache.spark.ui.storage

ExecutorStreamSummary(Seq<org.apache.spark.status.StreamBlockData>) - Constructor for class org.apache.spark.ui.storage.ExecutorStreamSummary

executorSummaries() - Method in class org.apache.spark.status.LiveStage

ExecutorSummary - Class in org.apache.spark.status.api.v1

executorSummary() - Method in class org.apache.spark.status.api.v1.StageData

executorSummary(String) - Method in class org.apache.spark.status.LiveStage

executorUpdates() - Method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate

exists(Column, Function1<Column, Column>) - Static method in class org.apache.spark.sql.functions
Returns whether a predicate holds for one or more elements in the array.
exists() - Method in interface org.apache.spark.sql.streaming.GroupState
Whether state exists or not.
exists(String) - Static method in class org.apache.spark.sql.types.UDTRegistration
Queries if a given user class is already registered or not.
exists() - Method in class org.apache.spark.streaming.State
Whether the state already exists.
exitCausedByApp() - Method in class org.apache.spark.ExecutorLostFailure

exitFn() - Method in interface org.apache.spark.util.CommandLineLoggingUtils

exp(Column) - Static method in class org.apache.spark.sql.functions
Computes the exponential of the given value.
exp(String) - Static method in class org.apache.spark.sql.functions
Computes the exponential of the given column.
ExpectationAggregator - Class in org.apache.spark.ml.clustering
ExpectationAggregator computes the partial expectation results.
ExpectationAggregator(int, Broadcast<double[]>, Broadcast<Tuple2<DenseVector, DenseVector>[]>) - Constructor for class org.apache.spark.ml.clustering.ExpectationAggregator

ExpectationSum - Class in org.apache.spark.mllib.clustering

ExpectationSum(double, double[], DenseVector<Object>[], DenseMatrix<Object>[]) - Constructor for class org.apache.spark.mllib.clustering.ExpectationSum

expectedFpp() - Method in class org.apache.spark.util.sketch.BloomFilter
Returns the probability that BloomFilter.mightContain(Object) erroneously returns true for an object that has not actually been put in the BloomFilter.
experimental() - Method in class org.apache.spark.sql.SparkSession
:: Experimental :: A collection of methods that are considered experimental, but can be used to hook into the query planner for advanced functionality.
experimental() - Method in class org.apache.spark.sql.SQLContext
:: Experimental :: A collection of methods that are considered experimental, but can be used to hook into the query planner for advanced functionality.
ExperimentalMethods - Class in org.apache.spark.sql
:: Experimental :: Holder for experimental methods for the bravest.
ExpireDeadHosts - Class in org.apache.spark

ExpireDeadHosts() - Constructor for class org.apache.spark.ExpireDeadHosts

expiryTime() - Method in class org.apache.spark.scheduler.BlacklistedExecutor

explain(boolean) - Method in class org.apache.spark.sql.Column
Prints the expression to the console for debugging purposes.
explain(boolean) - Method in class org.apache.spark.sql.Dataset
Prints the plans (logical and physical) to the console for debugging purposes.
explain() - Method in class org.apache.spark.sql.Dataset
Prints the physical plan to the console for debugging purposes.
explain() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
Prints the physical plan to the console for debugging purposes.
explain(boolean) - Method in interface org.apache.spark.sql.streaming.StreamingQuery
Prints the physical plan to the console for debugging purposes.
explainedVariance() - Method in class org.apache.spark.ml.feature.PCAModel

explainedVariance() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
Returns the explained variance regression score.
explainedVariance() - Method in class org.apache.spark.mllib.evaluation.RegressionMetrics
Returns the variance explained by regression.
explainedVariance() - Method in class org.apache.spark.mllib.feature.PCAModel

explainParam(Param<?>) - Method in interface org.apache.spark.ml.param.Params
Explains a param.
explainParams() - Method in interface org.apache.spark.ml.param.Params
Explains all params of this instance.
explode(Column) - Static method in class org.apache.spark.sql.functions
Creates a new row for each element in the given array or map column.
explode_outer(Column) - Static method in class org.apache.spark.sql.functions
Creates a new row for each element in the given array or map column.
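Although explode and explode_outer share the same first-sentence description above, they diverge on empty or null collections. A minimal sketch, assuming a local SparkSession (master URL and app name are illustrative):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{explode, explode_outer}

val spark = SparkSession.builder().master("local[1]").appName("explode-sketch").getOrCreate()
import spark.implicits._

val df = Seq(("a", Seq(1, 2)), ("b", Seq.empty[Int])).toDF("id", "xs")

// explode drops rows whose array is empty or null
val inner = df.select($"id", explode($"xs")).count()        // 2 rows: (a,1), (a,2)
// explode_outer keeps such rows, emitting null in the exploded column
val outer = df.select($"id", explode_outer($"xs")).count()  // 3 rows, including (b, null)

assert(inner == 2L)
assert(outer == 3L)
spark.stop()
```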
explodeNestedFieldNames(StructType) - Static method in class org.apache.spark.sql.util.SchemaUtils
Returns all column names in this schema as a flat list.
expm1(Column) - Static method in class org.apache.spark.sql.functions
Computes the exponential of the given value minus one.
expm1(String) - Static method in class org.apache.spark.sql.functions
Computes the exponential of the given column minus one.
ExponentialGenerator - Class in org.apache.spark.mllib.random
:: DeveloperApi :: Generates i.i.d. samples from the exponential distribution with the given mean.
ExponentialGenerator(double) - Constructor for class org.apache.spark.mllib.random.ExponentialGenerator

exponentialJavaRDD(JavaSparkContext, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
Java-friendly version of RandomRDDs.exponentialRDD.
exponentialJavaRDD(JavaSparkContext, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.exponentialJavaRDD with the default seed.
exponentialJavaRDD(JavaSparkContext, double, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.exponentialJavaRDD with the default number of partitions and the default seed.
exponentialJavaVectorRDD(JavaSparkContext, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
Java-friendly version of RandomRDDs.exponentialVectorRDD.
exponentialJavaVectorRDD(JavaSparkContext, double, long, int, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.exponentialJavaVectorRDD with the default seed.
exponentialJavaVectorRDD(JavaSparkContext, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.exponentialJavaVectorRDD with the default number of partitions and the default seed.
exponentialRDD(SparkContext, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
Generates an RDD comprised of i.i.d. samples from the exponential distribution.
exponentialVectorRDD(SparkContext, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
Generates an RDD[Vector] with vectors containing i.i.d. samples from the exponential distribution.
exp r() - Method in class org.apache.spark.sql.Column

expr(String) - Static method in class org.apache.spark.sql.functions
Parses the expression string into the column that it represents, similar to Dataset.selectExpr(java.lang.String...).
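The expr entry above turns a SQL fragment into a Column, which is handy when a transformation is easier to state in SQL than with Column operators. A minimal sketch, assuming a local SparkSession (master URL and app name are illustrative):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.expr

val spark = SparkSession.builder().master("local[1]").appName("expr-sketch").getOrCreate()
import spark.implicits._

val df = Seq((1, 2), (3, 4)).toDF("a", "b")

// expr parses a SQL fragment into a Column, mirroring Dataset.selectExpr
val sums = df.select(expr("a + b AS total")).as[Int].collect().sorted

assert(sums.sameElements(Array(3, 7)))
spark.stop()
```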
Expression - Interface in org.apache.spark.sql.connector.expressions
Base class of the public logical expression API.
Expression$() - Constructor for class org.apache.spark.sql.types.DecimalType.Expression$

Expressions - Class in org.apache.spark.sql.connector.expressions
Helper methods to create logical transforms to pass into Spark.
extensionsForCompressionCodecNames() - Static method in class org.apache.spark.sql.hive.orc.OrcFileFormat

externalBlockStoreSize() - Method in class org.apache.spark.storage.RDDInfo

ExternalClusterManager - Interface in org.apache.spark.scheduler
A cluster manager interface to plug in an external scheduler.
externalShuffleServicePort(SparkConf) - Static method in class org.apache.spark.storage.StorageUtils
Get the port used by the external shuffle service.
extractDistribution(Function1<BatchInfo, Option<Object>>) - Method in class org.apache.spark.streaming.scheduler.StatsReportListener

extractDoubleDistribution(Seq<Tuple2<TaskInfo, TaskMetrics>>, Function2<TaskInfo, TaskMetrics, Object>) - Static method in class org.apache.spark.scheduler.StatsReportListener

extractFn() - Method in class org.apache.spark.ui.JettyUtils.ServletParams

extractHostPortFromSparkUrl(String) - Static method in class org.apache.spark.util.Utils
Return a pair of host and port extracted from the sparkUrl.
extractInstances(Dataset<?>, int) - Method in interface org.apache.spark.ml.classification.ClassifierParams
Extract labelCol, weightCol (if any) and featuresCol from the given dataset, and put them in an RDD with strong types.
extractInstances(Dataset<?>) - Method in interface org.apache.spark.ml.PredictorParams
Extract labelCol, weightCol (if any) and featuresCol from the given dataset, and put them in an RDD with strong types.
extractInstances(Dataset<?>, Function1<org.apache.spark.ml.feature.Instance, BoxedUnit>) - Method in interface org.apache.spark.ml.PredictorParams
Extract labelCol, weightCol (if any) and featuresCol from the given dataset, and put them in an RDD with strong types.
extractLongDistribution(Seq<Tuple2<TaskInfo, TaskMetrics>>, Function2<TaskInfo, TaskMetrics, Object>) - Static method in class org.apache.spark.scheduler.StatsReportListener

extractParamMap(ParamMap) - Method in interface org.apache.spark.ml.param.Params
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values less than user-supplied values less than extra.
extractParamMap() - Method in interface org.apache.spark.ml.param.Params
extractParamMap with no extra values.
extractWeightedLabeledPoints(Dataset<?>) - Method in interface org.apache.spark.ml.regression.IsotonicRegressionBase
Extracts (label, feature, weight) from the input dataset.
extraOptimizations() - Method in class org.apache.spark.sql.ExperimentalMethods

extraStrategies() - Method in class org.apache.spark.sql.ExperimentalMethods
Allows extra strategies to be injected into the query planner at runtime.
eye(int) - Static method in class org.apache.spark.ml.linalg.DenseMatrix
Generate an identity matrix in DenseMatrix format.
eye(int) - Static method in class org.apache.spark.ml.linalg.Matrices
Generate a dense identity matrix in Matrix format.
eye(int) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
Generate an identity matrix in DenseMatrix format.
eye(int) - Static method in class org.apache.spark.mllib.linalg.Matrices
Generate a dense identity matrix in Matrix format.
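The four eye variants above differ only in package (ml vs mllib) and return type (concrete DenseMatrix vs the Matrix interface). A small sketch using the ml.linalg pair, which needs no SparkSession:

```scala
import org.apache.spark.ml.linalg.{DenseMatrix, Matrices}

// DenseMatrix.eye builds a concrete dense identity matrix
val i3 = DenseMatrix.eye(3)
assert(i3.numRows == 3 && i3.numCols == 3)
assert(i3(0, 0) == 1.0 && i3(0, 1) == 0.0)

// Matrices.eye produces the same identity, typed as the Matrix interface
val m = Matrices.eye(3)
assert(m(2, 2) == 1.0)
```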

F

f1Measure() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
Returns the document-based f1-measure averaged by the number of documents.
f1Measure(double) - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
Returns the f1-measure for a given label (category).
factorial(Column) - Static method in class org.apache.spark.sql.functions
Computes the factorial of the given value.
failed() - Method in class org.apache.spark.scheduler.TaskInfo

FAILED() - Static method in class org.apache.spark.TaskState

failedStages() - Method in class org.apache.spark.status.LiveJob

failedTasks() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary

failedTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

failedTasks() - Method in class org.apache.spark.status.LiveExecutor

failedTasks() - Method in class org.apache.spark.status.LiveExecutorStageSummary

failedTasks() - Method in class org.apache.spark.status.LiveJob

failedTasks() - Method in class org.apache.spark.status.LiveStage

Failure() - Static method in class org.apache.spark.ml.feature.RFormulaParser

failure(String) - Static method in class org.apache.spark.ml.feature.RFormulaParser

failureReason() - Method in class org.apache.spark.scheduler.StageInfo
If the stage failed, the reason why.
failureReason() - Method in class org.apache.spark.status.api.v1.StageData

failureReason() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo

failureReason() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo

failureReasonCell(String, int, boolean) - Static method in class org.apache.spark.streaming.ui.UIUtils

FAIR() - Static method in class org.apache.spark.scheduler.SchedulingMode

FAKE_HIVE_VERSION() - Static method in class org.apache.spark.sql.hive.HiveUtils

FalsePositiveRate - Class in org.apache.spark.mllib.evaluation.binary
False positive rate.
FalsePositiveRate() - Constructor for class org.apache.spark.mllib.evaluation.binary.FalsePositiveRate

falsePositiveRate(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
Returns the false positive rate for a given label (category).
falsePositiveRateByLabel() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
Returns the false positive rate for each label (category).
family() - Method in class org.apache.spark.ml.classification.LogisticRegression

family() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel

family() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
Param for the name of family, which is a description of the label distribution to be used in the model.
family() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression

family() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
Param for the name of family, which is a description of the error distribution to be used in the model.
family() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel

Family$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Family$

FamilyAndLink$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink$

FAST_IN_PROGRESS_PARSING() - Static method in class org.apache.spark.internal.config.History

fdr() - Method in class org.apache.spark.ml.feature.ChiSqSelector

fdr() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel

fdr() - Method in interface org.apache.spark.ml.feature.ChiSqSelectorParams
The upper bound of the expected false discovery rate.
fdr() - Method in class org.apache.spark.mllib.feature.ChiSqSelector

feature() - Method in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$.Data

feature() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData

feature() - Method in class org.apache.spark.mllib.tree.model.Split

FeatureHasher - Class in org.apache.spark.ml.feature
Feature hashing projects a set of categorical or numerical features into a feature vector of specified dimension (typically substantially smaller than that of the original feature space).
FeatureHasher(String) - Constructor for class org.apache.spark.ml.feature.FeatureHasher

FeatureHasher() - Constructor for class org.apache.spark.ml.feature.FeatureHasher

featureImportances() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel

featureImportances() - Method in class org.apache.spark.ml.classification.GBTClassificationModel

featureImportances() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel

featureImportances() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel

featureImportances() - Method in class org.apache.spark.ml.regression.GBTRegressionModel

featureImportances() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel

featureIndex() - Method in class org.apache.spark.ml.regression.IsotonicRegression

featureIndex() - Method in interface org.apache.spark.ml.regression.IsotonicRegressionBase
Param for the index of the feature if featuresCol is a vector column (default: 0); no effect otherwise.
featureIndex() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel

featureIndex() - Method in class org.apache.spark.ml.tree.CategoricalSplit

featureIndex() - Method in class org.apache.spark.ml.tree.ContinuousSplit

featureIndex() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData

featureIndex() - Method in interface org.apache.spark.ml.tree.Split
Index of the feature which this split tests.
features() - Method in class org.apache.spark.ml.feature.LabeledPoint

features() - Method in class org.apache.spark.mllib.regression.LabeledPoint

featuresCol() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
Field in "predictions" which gives the features of each instance as a vector.
featuresCol() - Method in class org.apache.spark.ml.classification.LogisticRegressionSummaryImpl

featuresCol() - Method in class org.apache.spark.ml.classification.OneVsRest

featuresCol() - Method in class org.apache.spark.ml.classification.OneVsRestModel

featuresCol() - Method in class org.apache.spark.ml.clustering.BisectingKMeans

featuresCol() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel

featuresCol() - Method in class org.apache.spark.ml.clustering.ClusteringSummary

featuresCol() - Method in class org.apache.spark.ml.clustering.GaussianMixture

featuresCol() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel

featuresCol() - Method in class org.apache.spark.ml.clustering.KMeans

featuresCol() - Method in class org.apache.spark.ml.clustering.KMeansModel

featuresCol() - Method in class org.apache.spark.ml.clustering.LDA

featuresCol() - Method in class org.apache.spark.ml.clustering.LDAModel

featuresCol() - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator

featuresCol() - Method in class org.apache.spark.ml.feature.ChiSqSelector

featuresCol() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel

featuresCol() - Method in class org.apache.spark.ml.feature.RFormula

featuresCol() - Method in class org.apache.spark.ml.feature.RFormulaModel

featuresCol() - Method in interface org.apache.spark.ml.param.shared.HasFeaturesCol
Param for features column name.
featuresCol() - Method in class org.apache.spark.ml.PredictionModel

featuresCol() - Method in class org.apache.spark.ml.Predictor

featuresCol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression

featuresCol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel

featuresCol() - Method in class org.apache.spark.ml.regression.IsotonicRegression

featuresCol() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel

featuresCol() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary

featureSubsetStrategy() - Method in class org.apache.spark.ml.classification.GBTClassificationModel

featureSubsetStrategy() - Method in class org.apache.spark.ml.classification.GBTClassifier

featureSubsetStrategy() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel

featureSubsetStrategy() - Method in class org.apache.spark.ml.classification.RandomForestClassifier

featureSubsetStrategy() - Method in class org.apache.spark.ml.regression.GBTRegressionModel

featureSubsetStrategy() - Method in class org.apache.spark.ml.regression.GBTRegressor

featureSubsetStrategy() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel

featureSubsetStrategy() - Method in class org.apache.spark.ml.regression.RandomForestRegressor

featureSubsetStrategy() - Method in interface org.apache.spark.ml.tree.TreeEnsembleParams
The number of features to consider for splits at each tree node.
featureSum() - Method in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats

FeatureType - Class in org.apache.spark.mllib.tree.configuration
Enum to describe whether a feature is "continuous" or "categorical".
FeatureType() - Constructor for class org.apache.spark.mllib.tree.configuration.FeatureType

featureType() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData

featureType() - Method in class org.apache.spark.mllib.tree.model.Split

FETCH_WAIT_TIME() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$

FetchFailed - Class in org.apache.spark
:: DeveloperApi :: Task failed to fetch shuffle data from a remote node.
FetchFailed(BlockManagerId, int, long, int, int, String) - Constructor for class org.apache.spark.FetchFailed

fetchFile(String, File, SparkConf, org.apache.spark.SecurityManager, Configuration, long, boolean) - Static method in class org.apache.spark.util.Utils
Download a file or directory to the target directory.
fetchPct() - Method in class org.apache.spark.scheduler.RuntimePercentage

fetchWaitTime() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions

fetchWaitTime() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics

field() - Method in class org.apache.spark.storage.BroadcastBlockId

fieldIndex(String) - Method in interface org.apache.spark.sql.Row
Returns the index of a given field name.
fieldIndex(String) - Method in class org.apache.spark.sql.types.StructType
Returns the index of a given field.
fieldNames() - Method in class org.apache.spark.sql.connector.catalog.TableChange.AddColumn

fieldNames() - Method in interface org.apache.spark.sql.connector.catalog.TableChange.ColumnChange

fieldNames() - Method in class org.apache.spark.sql.connector.catalog.TableChange.DeleteColumn

fieldNames() - Method in class org.apache.spark.sql.connector.catalog.TableChange.RenameColumn

fieldNames() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnComment

fieldNames() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnType

fieldNames() - Method in interface org.apache.spark.sql.connector.expressions.NamedReference
Returns the referenced field name as an array of String parts.
fieldNames() - Method in class org.apache.spark.sql.types.StructType
Returns all field names in an array.
fields() - 类 中的方法org.apache.spark.sql.types.StructType
 
FIFO() - 类 中的静态方法org.apache.spark.scheduler.SchedulingMode
 
FILE_FORMAT() - 类 中的静态方法org.apache.spark.sql.hive.execution.HiveOptions
 
FileBasedTopologyMapper - org.apache.spark.storage中的类
A simple file based topology mapper.
FileBasedTopologyMapper(SparkConf) - 类 的构造器org.apache.spark.storage.FileBasedTopologyMapper
 
FileCommitProtocol - org.apache.spark.internal.io中的类
An interface to define how a single Spark job commits its outputs.
FileCommitProtocol() - 类 的构造器org.apache.spark.internal.io.FileCommitProtocol
 
FileCommitProtocol.EmptyTaskCommitMessage$ - org.apache.spark.internal.io中的类
 
FileCommitProtocol.TaskCommitMessage - org.apache.spark.internal.io中的类
 
fileFormat() - 类 中的方法org.apache.spark.sql.hive.execution.HiveOptions
 
files() - 类 中的方法org.apache.spark.SparkContext
 
fileStream(String, Class<K>, Class<V>, Class<F>) - 类 中的方法org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format.
fileStream(String, Class<K>, Class<V>, Class<F>, Function<Path, Boolean>, boolean) - 类 中的方法org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format.
fileStream(String, Class<K>, Class<V>, Class<F>, Function<Path, Boolean>, boolean, Configuration) - 类 中的方法org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format.
fileStream(String, ClassTag<K>, ClassTag<V>, ClassTag<F>) - 类 中的方法org.apache.spark.streaming.StreamingContext
Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format.
fileStream(String, Function1<Path, Object>, boolean, ClassTag<K>, ClassTag<V>, ClassTag<F>) - 类 中的方法org.apache.spark.streaming.StreamingContext
Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format.
fileStream(String, Function1<Path, Object>, boolean, Configuration, ClassTag<K>, ClassTag<V>, ClassTag<F>) - 类 中的方法org.apache.spark.streaming.StreamingContext
Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format.
fill(long) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that replaces null or NaN values in numeric columns with value.
fill(double) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that replaces null or NaN values in numeric columns with value.
fill(String) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that replaces null values in string columns with value.
fill(long, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that replaces null or NaN values in specified numeric columns.
fill(double, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that replaces null or NaN values in specified numeric columns.
fill(long, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
(Scala-specific) Returns a new DataFrame that replaces null or NaN values in specified numeric columns.
fill(double, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
(Scala-specific) Returns a new DataFrame that replaces null or NaN values in specified numeric columns.
fill(String, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that replaces null values in specified string columns.
fill(String, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
(Scala-specific) Returns a new DataFrame that replaces null values in specified string columns.
fill(boolean) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that replaces null values in boolean columns with value.
fill(boolean, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
(Scala-specific) Returns a new DataFrame that replaces null values in specified boolean columns.
fill(boolean, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that replaces null values in specified boolean columns.
fill(Map<String, Object>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that replaces null values.
fill(Map<String, Object>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
(Scala-specific) Returns a new DataFrame that replaces null values.
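The fill overloads above all replace nulls with a constant, optionally restricted to named columns. Their behavior can be sketched in plain Python (illustrative only, not Spark code; the row dicts and column names are made up):

```python
def na_fill(rows, value, cols=None):
    """Replace None with `value` in the given columns (all columns if cols is None),
    mirroring the per-column behavior of DataFrameNaFunctions.fill."""
    out = []
    for row in rows:
        new_row = dict(row)
        for col, v in row.items():
            if cols is not None and col not in cols:
                continue  # column not selected for filling
            if v is None:
                new_row[col] = value
        out.append(new_row)
    return out

rows = [{"name": "a", "age": None}, {"name": None, "age": 3}]
# Fill only the "age" column; the null in "name" is left untouched.
na_fill(rows, 0, cols=["age"])  # [{"name": "a", "age": 0}, {"name": None, "age": 3}]
```

In real Spark, `df.na.fill(0, Seq("age"))` also skips columns whose type does not match the replacement value; this sketch omits that type check.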
filter(Function<Double, Boolean>) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return a new RDD containing only the elements that satisfy a predicate.
filter(Function<Tuple2<K, V>, Boolean>) - Method in class org.apache.spark.api.java.JavaPairRDD
Return a new RDD containing only the elements that satisfy a predicate.
filter(Function<T, Boolean>) - Method in class org.apache.spark.api.java.JavaRDD
Return a new RDD containing only the elements that satisfy a predicate.
filter(Function1<Graph<VD, ED>, Graph<VD2, ED2>>, Function1<EdgeTriplet<VD2, ED2>, Object>, Function2<Object, VD2, Object>, ClassTag<VD2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.GraphOps
Filter the graph by computing some values to filter on, and applying the predicates.
filter(Function1<EdgeTriplet<VD, ED>, Object>, Function2<Object, VD, Object>) - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
 
filter(Function1<Tuple2<Object, VD>, Object>) - Method in class org.apache.spark.graphx.VertexRDD
Restricts the vertex set to the set of vertices satisfying the given predicate.
filter(Params) - Method in class org.apache.spark.ml.param.ParamMap
Filters this param map for the given parent.
filter(Function1<T, Object>) - Method in class org.apache.spark.rdd.RDD
Return a new RDD containing only the elements that satisfy a predicate.
filter(Column) - Method in class org.apache.spark.sql.Dataset
Filters rows using the given condition.
filter(String) - Method in class org.apache.spark.sql.Dataset
Filters rows using the given SQL expression.
filter(Function1<T, Object>) - Method in class org.apache.spark.sql.Dataset
(Scala-specific) Returns a new Dataset that only contains elements where func returns true.
filter(FilterFunction<T>) - Method in class org.apache.spark.sql.Dataset
(Java-specific) Returns a new Dataset that only contains elements where func returns true.
filter(Column, Function1<Column, Column>) - Static method in class org.apache.spark.sql.functions
Returns an array of elements for which a predicate holds in a given array.
filter(Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
Returns an array of elements for which a predicate holds in a given array.
Filter - Class in org.apache.spark.sql.sources
A filter predicate for data sources.
Filter() - Constructor for class org.apache.spark.sql.sources.Filter
 
filter() - Method in class org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds
 
filter(Function<T, Boolean>) - Method in class org.apache.spark.streaming.api.java.JavaDStream
Return a new DStream containing only the elements that satisfy a predicate.
filter(Function<Tuple2<K, V>, Boolean>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream containing only the elements that satisfy a predicate.
filter(Function1<T, Object>) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream containing only the elements that satisfy a predicate.
filterByRange(K, K) - Method in class org.apache.spark.rdd.OrderedRDDFunctions
Returns an RDD containing only the elements in the inclusive range lower to upper.
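All of the filter variants above keep only elements satisfying a predicate, while filterByRange keeps only pairs whose key falls in an inclusive range (in Spark it can additionally prune partitions when the RDD is range-partitioned). A plain-Python sketch of the two behaviors (illustrative only):

```python
def rdd_filter(elems, pred):
    # filter: keep the elements for which the predicate returns True
    return [x for x in elems if pred(x)]

def filter_by_range(pairs, lower, upper):
    # filterByRange: keep pairs whose key lies in the inclusive range [lower, upper]
    return [(k, v) for k, v in pairs if lower <= k <= upper]

rdd_filter([1, 2, 3, 4], lambda x: x % 2 == 0)          # [2, 4]
filter_by_range([(1, "a"), (5, "b"), (9, "c")], 2, 8)   # [(5, "b")]
```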
FilterFunction<T> - Interface in org.apache.spark.api.java.function
Base interface for a function used in Dataset's filter function.
filterName() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
 
filterParams() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
 
finalStorageLevel() - Method in class org.apache.spark.ml.recommendation.ALS
 
finalStorageLevel() - Method in interface org.apache.spark.ml.recommendation.ALSParams
Param for StorageLevel for ALS model factors.
findClass(String) - Method in class org.apache.spark.util.ParentClassLoader
 
findColumnPosition(Seq<String>, StructType, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.util.SchemaUtils
Returns the given column's ordinal within the given schema.
findExpressionAndTrackLineageDown(Expression, LogicalPlan) - Static method in class org.apache.spark.sql.dynamicpruning.CleanupDynamicPruningFilters
 
findExpressionAndTrackLineageDown(Expression, LogicalPlan) - Static method in class org.apache.spark.sql.dynamicpruning.PartitionPruning
 
findFrequentSequentialPatterns(Dataset<?>) - Method in class org.apache.spark.ml.fpm.PrefixSpan
Finds the complete set of frequent sequential patterns in the input sequences of itemsets.
findListenersByClass(ClassTag<T>) - Method in interface org.apache.spark.util.ListenerBus
 
findMatchingTokenClusterConfig(SparkConf, String) - Static method in class org.apache.spark.kafka010.KafkaTokenUtil
 
findMissingPartitions() - Method in class org.apache.spark.ShuffleStatus
Returns the sequence of partition ids that are missing (i.e., need to be computed).
findSynonyms(String, int) - Method in class org.apache.spark.ml.feature.Word2VecModel
Find "num" words closest in similarity to the given word, not including the word itself.
findSynonyms(Vector, int) - Method in class org.apache.spark.ml.feature.Word2VecModel
Find "num" words whose vector representation is most similar to the supplied vector.
findSynonyms(String, int) - Method in class org.apache.spark.mllib.feature.Word2VecModel
Find synonyms of a word; do not include the word itself in results.
findSynonyms(Vector, int) - Method in class org.apache.spark.mllib.feature.Word2VecModel
Find synonyms of the vector representation of a word, possibly including any words in the model vocabulary whose vector representation is the supplied vector.
findSynonymsArray(Vector, int) - Method in class org.apache.spark.ml.feature.Word2VecModel
Find "num" words whose vector representation is most similar to the supplied vector.
findSynonymsArray(String, int) - Method in class org.apache.spark.ml.feature.Word2VecModel
Find "num" words closest in similarity to the given word, not including the word itself.
finish(OpenHashMap<String, Object>[]) - Method in class org.apache.spark.ml.feature.StringIndexerAggregator
 
finish(BUF) - Method in class org.apache.spark.sql.expressions.Aggregator
Transform the output of the reduction.
finished() - Method in class org.apache.spark.scheduler.TaskInfo
 
FINISHED() - Static method in class org.apache.spark.TaskState
 
finishTime() - Method in class org.apache.spark.scheduler.TaskInfo
The time when the task has completed successfully (including the time to remotely fetch results, if necessary).
first() - Method in class org.apache.spark.api.java.JavaDoubleRDD
 
first() - Method in class org.apache.spark.api.java.JavaPairRDD
 
first() - Method in interface org.apache.spark.api.java.JavaRDDLike
Return the first element in this RDD.
first() - Method in class org.apache.spark.rdd.RDD
Return the first element in this RDD.
first() - Method in class org.apache.spark.sql.Dataset
Returns the first row.
first(Column, boolean) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the first value in a group.
first(String, boolean) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the first value of a column in a group.
first(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the first value in a group.
first(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the first value of a column in a group.
firstFailureReason() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
 
firstLaunchTime() - Method in class org.apache.spark.status.LiveStage
 
firstTaskLaunchedTime() - Method in class org.apache.spark.status.api.v1.StageData
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.classification.OneVsRest
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.clustering.GaussianMixture
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.clustering.KMeans
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.clustering.LDA
 
fit(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Method in class org.apache.spark.ml.Estimator
Fits a single model to the input data with optional parameters.
fit(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Method in class org.apache.spark.ml.Estimator
Fits a single model to the input data with optional parameters.
fit(Dataset<?>, ParamMap) - Method in class org.apache.spark.ml.Estimator
Fits a single model to the input data with the provided parameter map.
fit(Dataset<?>) - Method in class org.apache.spark.ml.Estimator
Fits a model to the input data.
fit(Dataset<?>, ParamMap[]) - Method in class org.apache.spark.ml.Estimator
Fits multiple models to the input data with multiple sets of parameters.
fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.ChiSqSelector
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.CountVectorizer
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.IDF
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.Imputer
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.MaxAbsScaler
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.MinMaxScaler
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.OneHotEncoder
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.PCA
Computes a PCAModel that contains the principal components of the input vectors.
fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.RFormula
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.RobustScaler
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.StandardScaler
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.StringIndexer
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.VectorIndexer
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.Word2Vec
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.fpm.FPGrowth
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.Pipeline
Fits the pipeline to the input dataset with additional parameters.
fit(Dataset<?>) - Method in class org.apache.spark.ml.Predictor
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.recommendation.ALS
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.regression.IsotonicRegression
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.tuning.CrossValidator
 
fit(Dataset<?>) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
 
fit(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
Returns a ChiSquared feature selector.
fit(RDD<Vector>) - Method in class org.apache.spark.mllib.feature.IDF
Computes the inverse document frequency.
fit(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.feature.IDF
Computes the inverse document frequency.
fit(RDD<Vector>) - Method in class org.apache.spark.mllib.feature.PCA
Computes a PCAModel that contains the principal components of the input vectors.
fit(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.feature.PCA
Java-friendly version of fit().
fit(RDD<Vector>) - Method in class org.apache.spark.mllib.feature.StandardScaler
Computes the mean and variance and stores them as a model to be used for later scaling.
fit(RDD<S>) - Method in class org.apache.spark.mllib.feature.Word2Vec
Computes the vector representation of each word in the vocabulary.
fit(JavaRDD<S>) - Method in class org.apache.spark.mllib.feature.Word2Vec
Computes the vector representation of each word in the vocabulary (Java version).
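Every fit overload above follows the same Estimator contract: fit consumes a dataset, learns parameters, and returns an immutable Model that can then transform new data. A minimal plain-Python sketch of that contract (the MeanImputer name is made up for illustration; it is not a Spark class):

```python
class MeanImputerModel:
    """Immutable result of fitting: holds the learned statistic."""
    def __init__(self, mean):
        self.mean = mean

    def transform(self, xs):
        # Replace missing values with the mean learned at fit time.
        return [self.mean if x is None else x for x in xs]

class MeanImputer:
    """Estimator: fit() learns parameters from data and returns a Model."""
    def fit(self, xs):
        present = [x for x in xs if x is not None]
        return MeanImputerModel(sum(present) / len(present))

model = MeanImputer().fit([1.0, None, 3.0])   # learns mean = 2.0
model.transform([None, 5.0])                  # [2.0, 5.0]
```

The key design point mirrored here: the estimator itself is never mutated by fitting, so one configured estimator can be fit on many datasets (as the ParamMap[] overload does) and each call yields an independent model.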
FitEnd<M extends Model<M>> - Class in org.apache.spark.ml
Event fired after Estimator.fit.
FitEnd() - Constructor for class org.apache.spark.ml.FitEnd
 
fitIntercept() - Method in class org.apache.spark.ml.classification.LinearSVC
 
fitIntercept() - Method in class org.apache.spark.ml.classification.LinearSVCModel
 
fitIntercept() - Method in class org.apache.spark.ml.classification.LogisticRegression
 
fitIntercept() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
 
fitIntercept() - Method in interface org.apache.spark.ml.param.shared.HasFitIntercept
Param for whether to fit an intercept term.
fitIntercept() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
 
fitIntercept() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
 
fitIntercept() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
 
fitIntercept() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
 
fitIntercept() - Method in class org.apache.spark.ml.regression.LinearRegression
 
fitIntercept() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
 
FitStart<M extends Model<M>> - Class in org.apache.spark.ml
Event fired before Estimator.fit.
FitStart() - Constructor for class org.apache.spark.ml.FitStart
 
Fixed$() - Constructor for class org.apache.spark.sql.types.DecimalType.Fixed$
 
flatMap(FlatMapFunction<T, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
flatMap(Function1<T, TraversableOnce<U>>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
flatMap(Function1<T, TraversableOnce<U>>, Encoder<U>) - Method in class org.apache.spark.sql.Dataset
(Scala-specific) Returns a new Dataset by first applying a function to all elements of this Dataset, and then flattening the results.
flatMap(FlatMapFunction<T, U>, Encoder<U>) - Method in class org.apache.spark.sql.Dataset
(Java-specific) Returns a new Dataset by first applying a function to all elements of this Dataset, and then flattening the results.
flatMap(FlatMapFunction<T, U>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream by applying a function to all elements of this DStream, and then flattening the results.
flatMap(Function1<T, TraversableOnce<U>>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream by applying a function to all elements of this DStream, and then flattening the results.
FlatMapFunction<T,R> - Interface in org.apache.spark.api.java.function
A function that returns zero or more output records from each input record.
FlatMapFunction2<T1,T2,R> - Interface in org.apache.spark.api.java.function
A function that takes two inputs and returns zero or more output records.
flatMapGroups(Function2<K, Iterator<V>, TraversableOnce<U>>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
(Scala-specific) Applies the given function to each group of data.
flatMapGroups(FlatMapGroupsFunction<K, V, U>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
(Java-specific) Applies the given function to each group of data.
FlatMapGroupsFunction<K,V,R> - Interface in org.apache.spark.api.java.function
A function that returns zero or more output records from each grouping key and its values.
flatMapGroupsWithState(OutputMode, GroupStateTimeout, Function3<K, Iterator<V>, GroupState<S>, Iterator<U>>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
(Scala-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state.
flatMapGroupsWithState(FlatMapGroupsWithStateFunction<K, V, S, U>, OutputMode, Encoder<S>, Encoder<U>, GroupStateTimeout) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
(Java-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state.
FlatMapGroupsWithStateFunction<K,V,S,R> - Interface in org.apache.spark.api.java.function
::Experimental:: Base interface for a map function used in org.apache.spark.sql.KeyValueGroupedDataset.flatMapGroupsWithState( FlatMapGroupsWithStateFunction, org.apache.spark.sql.streaming.OutputMode, org.apache.spark.sql.Encoder, org.apache.spark.sql.Encoder)
flatMapToDouble(DoubleFlatMapFunction<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
flatMapToPair(PairFlatMapFunction<T, K2, V2>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
flatMapToPair(PairFlatMapFunction<T, K2, V2>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream by applying a function to all elements of this DStream, and then flattening the results.
flatMapValues(FlatMapFunction<V, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
Pass each value in the key-value pair RDD through a flatMap function without changing the keys; this also retains the original RDD's partitioning.
flatMapValues(Function1<V, TraversableOnce<U>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Pass each value in the key-value pair RDD through a flatMap function without changing the keys; this also retains the original RDD's partitioning.
flatMapValues(FlatMapFunction<V, U>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying a flatMap function to the value of each key-value pair in 'this' DStream without changing the key.
flatMapValues(Function1<V, TraversableOnce<U>>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying a flatMap function to the value of each key-value pair in 'this' DStream without changing the key.
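flatMapValues expands each value into zero or more values while leaving the key untouched, which is why the original partitioning can be preserved. The semantics can be sketched in plain Python (illustrative, not Spark code):

```python
def flat_map_values(pairs, f):
    # Expand each value into zero or more values, repeating the key for each;
    # keys are never touched, so key-based partitioning would be preserved.
    return [(k, u) for k, v in pairs for u in f(v)]

# A value that splits into two words yields two pairs; an empty value yields none.
flat_map_values([("a", "x y"), ("b", "")], str.split)
# [("a", "x"), ("a", "y")]
```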
flatten(Column) - Static method in class org.apache.spark.sql.functions
Creates a single array from an array of arrays.
FLOAT() - Static method in class org.apache.spark.sql.Encoders
An encoder for nullable float type.
FloatExactNumeric - Class in org.apache.spark.sql.types
 
FloatExactNumeric() - Constructor for class org.apache.spark.sql.types.FloatExactNumeric
 
FloatParam - Class in org.apache.spark.ml.param
:: DeveloperApi :: Specialized version of Param[Float] for Java.
FloatParam(String, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.FloatParam
 
FloatParam(String, String, String) - Constructor for class org.apache.spark.ml.param.FloatParam
 
FloatParam(Identifiable, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.FloatParam
 
FloatParam(Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.FloatParam
 
FloatType - Static variable in class org.apache.spark.sql.types.DataTypes
Gets the FloatType object.
FloatType - Class in org.apache.spark.sql.types
The data type representing Float values.
FloatType() - Constructor for class org.apache.spark.sql.types.FloatType
 
floor(Column) - Static method in class org.apache.spark.sql.functions
Computes the floor of the given value.
floor(String) - Static method in class org.apache.spark.sql.functions
Computes the floor of the given column.
floor() - Method in class org.apache.spark.sql.types.Decimal
 
floor(Duration) - Method in class org.apache.spark.streaming.Time
 
floor(Duration, Time) - Method in class org.apache.spark.streaming.Time
 
flush() - Method in class org.apache.spark.serializer.SerializationStream
 
flush() - Method in class org.apache.spark.storage.TimeTrackingOutputStream
 
fMeasure(double, double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
Returns the f-measure for a given label (category).
fMeasure(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
Returns the f1-measure for a given label (category).
fMeasureByLabel(double) - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
Returns the f-measure for each label (category).
fMeasureByLabel() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
Returns the f1-measure for each label (category).
fMeasureByThreshold() - Method in interface org.apache.spark.ml.classification.BinaryLogisticRegressionSummary
Returns a dataframe with two fields (threshold, F-Measure) giving the curve with beta = 1.0.
fMeasureByThreshold() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl
 
fMeasureByThreshold(double) - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Returns the (threshold, F-Measure) curve.
fMeasureByThreshold() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Returns the (threshold, F-Measure) curve with beta = 1.0.
fold(T, Function2<T, T, T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Aggregate the elements of each partition, and then the results for all the partitions, using a given associative function and a neutral "zero value".
fold(T, Function2<T, T, T>) - Method in class org.apache.spark.rdd.RDD
Aggregate the elements of each partition, and then the results for all the partitions, using a given associative function and a neutral "zero value".
foldByKey(V, Partitioner, Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
foldByKey(V, int, Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
foldByKey(V, Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
foldByKey(V, Partitioner, Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
foldByKey(V, int, Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
foldByKey(V, Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
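foldByKey merges the values for each key with an associative function, starting from a neutral zero value; because the zero may be applied once per partition, it must not change the result (0 for addition, 1 for multiplication). A plain-Python sketch of the single-partition semantics (illustrative only):

```python
def fold_by_key(pairs, zero, f):
    # Merge the values for each key, starting each key's accumulator at `zero`.
    acc = {}
    for k, v in pairs:
        acc[k] = f(acc.get(k, zero), v)
    return acc

fold_by_key([("a", 1), ("b", 2), ("a", 3)], 0, lambda x, y: x + y)
# {"a": 4, "b": 2}
```

In distributed Spark the fold runs once per partition and the per-partition results are then merged with the same function, which is why associativity and a true identity element are required.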
forall(Column, Function1<Column, Column>) - Static method in class org.apache.spark.sql.functions
Returns whether a predicate holds for every element in the array.
forceIndexLabel() - Method in class org.apache.spark.ml.feature.RFormula
 
forceIndexLabel() - Method in interface org.apache.spark.ml.feature.RFormulaBase
Force indexing of the label whether it is numeric or string type.
forceIndexLabel() - Method in class org.apache.spark.ml.feature.RFormulaModel
 
foreach(VoidFunction<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Applies a function f to all elements of this RDD.
foreach(Function1<T, BoxedUnit>) - Method in class org.apache.spark.rdd.RDD
Applies a function f to all elements of this RDD.
foreach(Function1<T, BoxedUnit>) - Method in class org.apache.spark.sql.Dataset
Applies a function f to all rows.
foreach(ForeachFunction<T>) - Method in class org.apache.spark.sql.Dataset
(Java-specific) Runs func on each element of this Dataset.
foreach(ForeachWriter<T>) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
Sets the output of the streaming query to be processed using the provided writer object.
foreachActive(Function3<Object, Object, Object, BoxedUnit>) - Method in class org.apache.spark.ml.linalg.DenseMatrix
 
foreachActive(Function2<Object, Object, BoxedUnit>) - Method in class org.apache.spark.ml.linalg.DenseVector
 
foreachActive(Function3<Object, Object, Object, BoxedUnit>) - Method in interface org.apache.spark.ml.linalg.Matrix
Applies a function f to all the active elements of dense and sparse matrices.
foreachActive(Function3<Object, Object, Object, BoxedUnit>) - Method in class org.apache.spark.ml.linalg.SparseMatrix
 
foreachActive(Function2<Object, Object, BoxedUnit>) - Method in class org.apache.spark.ml.linalg.SparseVector
 
foreachActive(Function2<Object, Object, BoxedUnit>) - Method in interface org.apache.spark.ml.linalg.Vector
Applies a function f to all the active elements of dense and sparse vectors.
foreachActive(Function2<Object, Object, BoxedUnit>) - Method in class org.apache.spark.mllib.linalg.DenseVector
 
foreachActive(Function3<Object, Object, Object, BoxedUnit>) - Method in interface org.apache.spark.mllib.linalg.Matrix
Applies a function f to all the active elements of dense and sparse matrices.
foreachActive(Function2<Object, Object, BoxedUnit>) - Method in class org.apache.spark.mllib.linalg.SparseVector
 
foreachActive(Function2<Object, Object, BoxedUnit>) - Method in interface org.apache.spark.mllib.linalg.Vector
Applies a function f to all the active elements of dense and sparse vectors.
foreachAsync(VoidFunction<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
The asynchronous version of the foreach action, which applies a function f to all the elements of this RDD.
foreachAsync(Function1<T, BoxedUnit>) - Method in class org.apache.spark.rdd.AsyncRDDActions
Applies a function f to all elements of this RDD.
foreachBatch(Function2<Dataset<T>, Object, BoxedUnit>) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
:: Experimental :: (Scala-specific) Sets the output of the streaming query to be processed using the provided function.
foreachBatch(VoidFunction2<Dataset<T>, Long>) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
:: Experimental :: (Java-specific) Sets the output of the streaming query to be processed using the provided function.
ForeachFunction<T> - Interface in org.apache.spark.api.java.function
Base interface for a function used in Dataset's foreach function.
foreachPartition(VoidFunction<Iterator<T>>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Applies a function f to each partition of this RDD.
foreachPartition(Function1<Iterator<T>, BoxedUnit>) - Method in class org.apache.spark.rdd.RDD
Applies a function f to each partition of this RDD.
foreachPartition(Function1<Iterator<T>, BoxedUnit>) - Method in class org.apache.spark.sql.Dataset
Applies a function f to each partition of this Dataset.
foreachPartition(ForeachPartitionFunction<T>) - Method in class org.apache.spark.sql.Dataset
(Java-specific) Runs func on each partition of this Dataset.
foreachPartitionAsync(VoidFunction<Iterator<T>>) - Method in interface org.apache.spark.api.java.JavaRDDLike
The asynchronous version of the foreachPartition action, which applies a function f to each partition of this RDD.
foreachPartitionAsync(Function1<Iterator<T>, BoxedUnit>) - Method in class org.apache.spark.rdd.AsyncRDDActions
Applies a function f to each partition of this RDD.
ForeachPartitionFunction<T> - Interface in org.apache.spark.api.java.function
Base interface for a function used in Dataset's foreachPartition function.
foreachRDD(VoidFunction<R>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Apply a function to each RDD in this DStream.
foreachRDD(VoidFunction2<R, Time>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Apply a function to each RDD in this DStream.
foreachRDD(Function1<RDD<T>, BoxedUnit>) - Method in class org.apache.spark.streaming.dstream.DStream
Apply a function to each RDD in this DStream.
foreachRDD(Function2<RDD<T>, Time, BoxedUnit>) - Method in class org.apache.spark.streaming.dstream.DStream
Apply a function to each RDD in this DStream.
ForeachWriter<T> - Class in org.apache.spark.sql
The abstract class for writing custom logic to process data generated by a query.
ForeachWriter() - Constructor for class org.apache.spark.sql.ForeachWriter
 
format() - 类 中的方法org.apache.spark.ml.clustering.InternalKMeansModelWriter
 
format() - 类 中的方法org.apache.spark.ml.clustering.PMMLKMeansModelWriter
 
format() - 类 中的方法org.apache.spark.ml.regression.InternalLinearRegressionModelWriter
 
format() - 类 中的方法org.apache.spark.ml.regression.PMMLLinearRegressionModelWriter
 
format(String) - 类 中的方法org.apache.spark.ml.util.GeneralMLWriter
Specifies the format of ML export (e.g.
format() - 接口 中的方法org.apache.spark.ml.util.MLFormatRegister
The string that represents the format that this format provider uses.
format(String) - 类 中的方法org.apache.spark.sql.DataFrameReader
Specifies the input data source format.
format(String) - 类 中的方法org.apache.spark.sql.DataFrameWriter
Specifies the underlying output data source.
format(String) - 类 中的方法org.apache.spark.sql.streaming.DataStreamReader
Specifies the input data source format.
format(String) - 类 中的方法org.apache.spark.sql.streaming.DataStreamWriter
Specifies the underlying output data source.
format_number(Column, int) - 类 中的静态方法org.apache.spark.sql.functions
Formats numeric column x to a format like '#,###,###.##', rounded to d decimal places with HALF_EVEN round mode, and returns the result as a string column.
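The HALF_EVEN mode named above rounds ties to the nearest even digit (banker's rounding). A minimal, Spark-free sketch of that rounding behavior using Python's standard decimal module:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def half_even(x: str, places: int) -> str:
    """Round a decimal string to `places` places with banker's rounding."""
    quantum = Decimal(10) ** -places
    return str(Decimal(x).quantize(quantum, rounding=ROUND_HALF_EVEN))

# Ties go to the nearest even digit, unlike grade-school "round half up".
print(half_even("2.345", 2))  # 2.34 (last kept digit 4 is even)
print(half_even("2.355", 2))  # 2.36 (rounds up to the even digit 6)
```

This only illustrates the rounding mode; format_number additionally applies the '#,###,###.##' grouping pattern.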
format_string(String, Column...) - Static method in class org.apache.spark.sql.functions
Formats the arguments in printf-style and returns the result as a string column.
format_string(String, Seq<Column>) - Static method in class org.apache.spark.sql.functions
Formats the arguments in printf-style and returns the result as a string column.
formatBatchTime(long, long, boolean, TimeZone) - Static method in class org.apache.spark.streaming.ui.UIUtils
If batchInterval is less than 1 second, format batchTime with milliseconds.
formatDate(Date) - Static method in class org.apache.spark.ui.UIUtils

formatDate(long) - Static method in class org.apache.spark.ui.UIUtils

formatDuration(long) - Static method in class org.apache.spark.ui.UIUtils

formatDurationVerbose(long) - Static method in class org.apache.spark.ui.UIUtils
Generate a verbose human-readable string representing a duration such as "5 second 35 ms"
formatNumber(double) - Static method in class org.apache.spark.ui.UIUtils
Generate a human-readable string representing a number (e.g. 100 K)
formula() - Method in class org.apache.spark.ml.feature.RFormula

formula() - Method in interface org.apache.spark.ml.feature.RFormulaBase
R formula parameter.
formula() - Method in class org.apache.spark.ml.feature.RFormulaModel

forward(DenseMatrix<Object>, boolean) - Method in interface org.apache.spark.ml.ann.TopologyModel
Forward propagation
FPGA() - Static method in class org.apache.spark.resource.ResourceUtils

FPGrowth - Class in org.apache.spark.ml.fpm
A parallel FP-growth algorithm to mine frequent itemsets.
FPGrowth(String) - Constructor for class org.apache.spark.ml.fpm.FPGrowth

FPGrowth() - Constructor for class org.apache.spark.ml.fpm.FPGrowth

FPGrowth - Class in org.apache.spark.mllib.fpm
A parallel FP-growth algorithm to mine frequent itemsets.
FPGrowth() - Constructor for class org.apache.spark.mllib.fpm.FPGrowth
Constructs a default instance with default parameters {minSupport: 0.3, numPartitions: same as the input data}.
FPGrowth.FreqItemset<Item> - Class in org.apache.spark.mllib.fpm
Frequent itemset.
FPGrowthModel - Class in org.apache.spark.ml.fpm
Model fitted by FPGrowth.
FPGrowthModel<Item> - Class in org.apache.spark.mllib.fpm
Model trained by FPGrowth, which holds frequent itemsets.
FPGrowthModel(RDD<FPGrowth.FreqItemset<Item>>, Map<Item, Object>, ClassTag<Item>) - Constructor for class org.apache.spark.mllib.fpm.FPGrowthModel

FPGrowthModel(RDD<FPGrowth.FreqItemset<Item>>, ClassTag<Item>) - Constructor for class org.apache.spark.mllib.fpm.FPGrowthModel

FPGrowthModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.fpm

FPGrowthParams - Interface in org.apache.spark.ml.fpm
Common params for FPGrowth and FPGrowthModel
fpr() - Method in class org.apache.spark.ml.feature.ChiSqSelector

fpr() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel

fpr() - Method in interface org.apache.spark.ml.feature.ChiSqSelectorParams
The highest p-value for features to be kept.
fpr() - Method in class org.apache.spark.mllib.feature.ChiSqSelector

FRACTION_CACHED() - Static method in class org.apache.spark.ui.storage.ToolTips

freq() - Method in class org.apache.spark.mllib.fpm.FPGrowth.FreqItemset

freq() - Method in class org.apache.spark.mllib.fpm.PrefixSpan.FreqSequence

freqItems(String[], double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Finding frequent items for columns, possibly with false positives.
freqItems(String[]) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Finding frequent items for columns, possibly with false positives.
freqItems(Seq<String>, double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
(Scala-specific) Finding frequent items for columns, possibly with false positives.
freqItems(Seq<String>) - Method in class org.apache.spark.sql.DataFrameStatFunctions
(Scala-specific) Finding frequent items for columns, possibly with false positives.
FreqItemset(Object, long) - Constructor for class org.apache.spark.mllib.fpm.FPGrowth.FreqItemset

freqItemsets() - Method in class org.apache.spark.ml.fpm.FPGrowthModel

freqItemsets() - Method in class org.apache.spark.mllib.fpm.FPGrowthModel

FreqSequence(Object[], long) - Constructor for class org.apache.spark.mllib.fpm.PrefixSpan.FreqSequence

freqSequences() - Method in class org.apache.spark.mllib.fpm.PrefixSpanModel

from_csv(Column, StructType, Map<String, String>) - Static method in class org.apache.spark.sql.functions
Parses a column containing a CSV string into a StructType with the specified schema.
from_csv(Column, Column, Map<String, String>) - Static method in class org.apache.spark.sql.functions
(Java-specific) Parses a column containing a CSV string into a StructType with the specified schema.
from_json(Column, StructType, Map<String, String>) - Static method in class org.apache.spark.sql.functions
(Scala-specific) Parses a column containing a JSON string into a StructType with the specified schema.
from_json(Column, DataType, Map<String, String>) - Static method in class org.apache.spark.sql.functions
(Scala-specific) Parses a column containing a JSON string into a MapType with StringType as keys type, StructType or ArrayType with the specified schema.
from_json(Column, StructType, Map<String, String>) - Static method in class org.apache.spark.sql.functions
(Java-specific) Parses a column containing a JSON string into a StructType with the specified schema.
from_json(Column, DataType, Map<String, String>) - Static method in class org.apache.spark.sql.functions
(Java-specific) Parses a column containing a JSON string into a MapType with StringType as keys type, StructType or ArrayType with the specified schema.
from_json(Column, StructType) - Static method in class org.apache.spark.sql.functions
Parses a column containing a JSON string into a StructType with the specified schema.
from_json(Column, DataType) - Static method in class org.apache.spark.sql.functions
Parses a column containing a JSON string into a MapType with StringType as keys type, StructType or ArrayType with the specified schema.
from_json(Column, String, Map<String, String>) - Static method in class org.apache.spark.sql.functions
(Java-specific) Parses a column containing a JSON string into a MapType with StringType as keys type, StructType or ArrayType with the specified schema.
from_json(Column, String, Map<String, String>) - Static method in class org.apache.spark.sql.functions
(Scala-specific) Parses a column containing a JSON string into a MapType with StringType as keys type, StructType or ArrayType with the specified schema.
from_json(Column, Column) - Static method in class org.apache.spark.sql.functions
(Scala-specific) Parses a column containing a JSON string into a MapType with StringType as keys type, StructType or ArrayType of StructTypes with the specified schema.
from_json(Column, Column, Map<String, String>) - Static method in class org.apache.spark.sql.functions
(Java-specific) Parses a column containing a JSON string into a MapType with StringType as keys type, StructType or ArrayType of StructTypes with the specified schema.
from_unixtime(Column) - Static method in class org.apache.spark.sql.functions
Converts the number of seconds from unix epoch (1970-01-01 00:00:00 UTC) to a string representing the timestamp of that moment in the current system time zone in the uuuu-MM-dd HH:mm:ss format.
from_unixtime(Column, String) - Static method in class org.apache.spark.sql.functions
Converts the number of seconds from unix epoch (1970-01-01 00:00:00 UTC) to a string representing the timestamp of that moment in the current system time zone in the given format.
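The conversion behind from_unixtime can be illustrated without Spark. A standalone Python sketch (pinned to UTC here for determinism; Spark itself renders the timestamp in the current system time zone as described above):

```python
from datetime import datetime, timezone

def from_unixtime_utc(seconds: int, fmt: str = "%Y-%m-%d %H:%M:%S") -> str:
    """Format epoch seconds as a timestamp string, fixed to UTC."""
    return datetime.fromtimestamp(seconds, tz=timezone.utc).strftime(fmt)

print(from_unixtime_utc(0))           # 1970-01-01 00:00:00
print(from_unixtime_utc(1577836800))  # 2020-01-01 00:00:00
```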
from_utc_timestamp(Column, String) - Static method in class org.apache.spark.sql.functions
Deprecated.
This function is deprecated and will be removed in future versions. Since 3.0.0.
from_utc_timestamp(Column, Column) - Static method in class org.apache.spark.sql.functions
Deprecated.
This function is deprecated and will be removed in future versions. Since 3.0.0.
fromArrowField(Field) - Static method in class org.apache.spark.sql.util.ArrowUtils

fromArrowSchema(Schema) - Static method in class org.apache.spark.sql.util.ArrowUtils

fromArrowType(ArrowType) - Static method in class org.apache.spark.sql.util.ArrowUtils

fromCOO(int, int, Iterable<Tuple3<Object, Object, Object>>) - Static method in class org.apache.spark.ml.linalg.SparseMatrix
Generate a SparseMatrix from Coordinate List (COO) format.
fromCOO(int, int, Iterable<Tuple3<Object, Object, Object>>) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
Generate a SparseMatrix from Coordinate List (COO) format.
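A COO list is simply a collection of (row, column, value) triples. A toy, Spark-free sketch of reading such triples into a dense matrix (fromCOO itself produces a compressed sparse matrix; summing duplicate positions here is an illustrative assumption, not a statement about Spark's exact behavior):

```python
def coo_to_dense(num_rows, num_cols, entries):
    """Build a dense row-major matrix from (row, col, value) COO triples.
    Duplicate (row, col) entries are summed in this sketch."""
    m = [[0.0] * num_cols for _ in range(num_rows)]
    for i, j, v in entries:
        m[i][j] += v
    return m

print(coo_to_dense(2, 2, [(0, 0, 1.0), (1, 1, 2.0), (1, 1, 0.5)]))
# [[1.0, 0.0], [0.0, 2.5]]
```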
fromDDL(String) - Static method in class org.apache.spark.sql.types.DataType

fromDDL(String) - Static method in class org.apache.spark.sql.types.StructType
Creates StructType for a given DDL-formatted string, which is a comma separated list of field definitions, e.g., a INT, b STRING.
fromDecimal(Object) - Static method in class org.apache.spark.sql.types.Decimal

fromDStream(DStream<T>, ClassTag<T>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
Convert a scala DStream to a Java-friendly JavaDStream.
fromEdgePartitions(RDD<Tuple2<Object, EdgePartition<ED, VD>>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
Create a graph from EdgePartitions, setting referenced vertices to defaultVertexAttr.
fromEdges(RDD<Edge<ED>>, ClassTag<ED>, ClassTag<VD>) - Static method in class org.apache.spark.graphx.EdgeRDD
Creates an EdgeRDD from a set of edges.
fromEdges(RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.Graph
Construct a graph from a collection of edges.
fromEdges(EdgeRDD<?>, int, VD, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
Constructs a VertexRDD containing all vertices referred to in edges.
fromEdgeTuples(RDD<Tuple2<Object, Object>>, VD, Option<PartitionStrategy>, StorageLevel, StorageLevel, ClassTag<VD>) - Static method in class org.apache.spark.graphx.Graph
Construct a graph from a collection of edges encoded as vertex id pairs.
fromExistingRDDs(VertexRDD<VD>, EdgeRDD<ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
Create a graph from a VertexRDD and an EdgeRDD with the same replicated vertex type as the vertices.
fromInputDStream(InputDStream<T>, ClassTag<T>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
Convert a scala InputDStream to a Java-friendly JavaInputDStream.
fromInputDStream(InputDStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
Convert a scala InputDStream of pairs to a Java-friendly JavaPairInputDStream.
fromInt(int) - Static method in class org.apache.spark.sql.types.ByteExactNumeric

fromInt(int) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted

fromInt(int) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric

fromInt(int) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric

fromInt(int) - Static method in class org.apache.spark.sql.types.FloatExactNumeric

fromInt(int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric

fromInt(int) - Static method in class org.apache.spark.sql.types.LongExactNumeric

fromInt(int) - Static method in class org.apache.spark.sql.types.ShortExactNumeric

fromJavaDStream(JavaDStream<Tuple2<K, V>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream

fromJavaRDD(JavaRDD<Tuple2<K, V>>) - Static method in class org.apache.spark.api.java.JavaPairRDD
Convert a JavaRDD of key-value pairs to JavaPairRDD.
fromJson(String) - Static method in class org.apache.spark.ml.linalg.JsonMatrixConverter
Parses the JSON representation of a Matrix into a Matrix.
fromJson(String) - Static method in class org.apache.spark.ml.linalg.JsonVectorConverter
Parses the JSON representation of a vector into a Vector.
fromJson(String) - Static method in class org.apache.spark.mllib.linalg.Vectors
Parses the JSON representation of a vector into a Vector.
fromJson(String) - Static method in class org.apache.spark.sql.types.DataType

fromJson(String) - Static method in class org.apache.spark.sql.types.Metadata
Creates a Metadata instance from JSON.
fromKinesisInitialPosition(InitialPositionInStream) - Static method in class org.apache.spark.streaming.kinesis.KinesisInitialPositions
Returns an instance of KinesisInitialPosition based on the passed InitialPositionInStream.
fromMetadata(Metadata) - Method in interface org.apache.spark.ml.attribute.AttributeFactory
Creates an Attribute from a Metadata instance.
fromML(DenseMatrix) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
Convert new linalg type to spark.mllib type.
fromML(DenseVector) - Static method in class org.apache.spark.mllib.linalg.DenseVector
Convert new linalg type to spark.mllib type.
fromML(Matrix) - Static method in class org.apache.spark.mllib.linalg.Matrices
Convert new linalg type to spark.mllib type.
fromML(SparseMatrix) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
Convert new linalg type to spark.mllib type.
fromML(SparseVector) - Static method in class org.apache.spark.mllib.linalg.SparseVector
Convert new linalg type to spark.mllib type.
fromML(Vector) - Static method in class org.apache.spark.mllib.linalg.Vectors
Convert new linalg type to spark.mllib type.
fromName(String) - Static method in class org.apache.spark.ml.attribute.AttributeType
Gets the AttributeType object from its name.
fromNullable(T) - Static method in class org.apache.spark.api.java.Optional

fromOld(Node, Map<Object, Object>) - Static method in class org.apache.spark.ml.tree.Node
Create a new Node from the old Node format, recursively creating child nodes as needed.
fromPairDStream(DStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream

fromPairRDD(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.mllib.rdd.MLPairRDDFunctions
Implicit conversion from a pair RDD to MLPairRDDFunctions.
fromParams(GeneralizedLinearRegressionBase) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Family$
Gets the Family object based on param family and variancePower.
fromParams(GeneralizedLinearRegressionBase) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Link$
Gets the Link object based on param family, link and linkPower.
fromRDD(RDD<Object>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD

fromRDD(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.api.java.JavaPairRDD

fromRDD(RDD<T>, ClassTag<T>) - Static method in class org.apache.spark.api.java.JavaRDD

fromRDD(RDD<T>, ClassTag<T>) - Static method in class org.apache.spark.mllib.rdd.RDDFunctions
Implicit conversion from an RDD to RDDFunctions.
fromRdd(RDD<?>) - Static method in class org.apache.spark.storage.RDDInfo

fromReceiverInputDStream(ReceiverInputDStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
Convert a scala ReceiverInputDStream to a Java-friendly JavaReceiverInputDStream.
fromReceiverInputDStream(ReceiverInputDStream<T>, ClassTag<T>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
Convert a scala ReceiverInputDStream to a Java-friendly JavaReceiverInputDStream.
fromSparkContext(SparkContext) - Static method in class org.apache.spark.api.java.JavaSparkContext

fromStage(Stage, int, Option<Object>, TaskMetrics, Seq<Seq<TaskLocation>>) - Static method in class org.apache.spark.scheduler.StageInfo
Construct a StageInfo from a Stage.
fromString(String) - Static method in enum org.apache.spark.JobExecutionStatus

fromString(String) - Static method in class org.apache.spark.mllib.tree.impurity.Impurities

fromString(String) - Static method in class org.apache.spark.mllib.tree.loss.Losses

fromString(String) - Static method in enum org.apache.spark.status.api.v1.ApplicationStatus

fromString(String) - Static method in enum org.apache.spark.status.api.v1.StageStatus

fromString(String) - Static method in enum org.apache.spark.status.api.v1.streaming.BatchStatus

fromString(String) - Static method in enum org.apache.spark.status.api.v1.TaskSorting

fromString(String) - Static method in class org.apache.spark.storage.StorageLevel
:: DeveloperApi :: Return the StorageLevel object with the specified name.
fromStructField(StructField) - Static method in class org.apache.spark.ml.attribute.Attribute

fromStructField(StructField) - Method in interface org.apache.spark.ml.attribute.AttributeFactory
Creates an Attribute from a StructField instance.
fromStructField(StructField) - Static method in class org.apache.spark.ml.attribute.AttributeGroup
Creates an attribute group from a StructField instance.
fromStructField(StructField) - Static method in class org.apache.spark.ml.attribute.BinaryAttribute

fromStructField(StructField) - Static method in class org.apache.spark.ml.attribute.NominalAttribute

fromStructField(StructField) - Static method in class org.apache.spark.ml.attribute.NumericAttribute

fullOuterJoin(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
Perform a full outer join of this and other.
fullOuterJoin(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
Perform a full outer join of this and other.
fullOuterJoin(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
Perform a full outer join of this and other.
fullOuterJoin(RDD<Tuple2<K, W>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
Perform a full outer join of this and other.
fullOuterJoin(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Perform a full outer join of this and other.
fullOuterJoin(RDD<Tuple2<K, W>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
Perform a full outer join of this and other.
fullOuterJoin(JavaPairDStream<K, W>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
fullOuterJoin(JavaPairDStream<K, W>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
fullOuterJoin(JavaPairDStream<K, W>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
fullOuterJoin(DStream<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
fullOuterJoin(DStream<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
fullOuterJoin(DStream<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
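A full outer join keeps every key that appears on either side, padding the missing side. A pure-Python sketch of these semantics (Spark wraps absent values in Option/Optional, where this sketch uses None; output order here is sorted by key only for readability):

```python
def full_outer_join(left, right):
    """Full outer join of two (key, value) lists; missing sides become None."""
    keys = {k for k, _ in left} | {k for k, _ in right}
    l_vals, r_vals = {}, {}
    for k, v in left:
        l_vals.setdefault(k, []).append(v)
    for k, v in right:
        r_vals.setdefault(k, []).append(v)
    out = []
    for k in sorted(keys):
        # Cartesian product of both sides; [None] stands in for an absent side.
        for lv in l_vals.get(k, [None]):
            for rv in r_vals.get(k, [None]):
                out.append((k, (lv, rv)))
    return out

print(full_outer_join([(1, "a"), (2, "b")], [(2, "x"), (3, "y")]))
# [(1, ('a', None)), (2, ('b', 'x')), (3, (None, 'y'))]
```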
fullStackTrace() - Method in class org.apache.spark.ExceptionFailure

Function<T1,R> - Interface in org.apache.spark.api.java.function
Base interface for functions whose return types do not create special RDDs.
Function - Class in org.apache.spark.sql.catalog
A user-defined function in Spark, as returned by listFunctions method in Catalog.
Function(String, String, String, String, boolean) - Constructor for class org.apache.spark.sql.catalog.Function

function(Function4<Time, KeyType, Option<ValueType>, State<StateType>, Option<MappedType>>) - Static method in class org.apache.spark.streaming.StateSpec
Create a StateSpec for setting all the specifications of the mapWithState operation on a pair DStream.
function(Function3<KeyType, Option<ValueType>, State<StateType>, MappedType>) - Static method in class org.apache.spark.streaming.StateSpec
Create a StateSpec for setting all the specifications of the mapWithState operation on a pair DStream.
function(Function4<Time, KeyType, Optional<ValueType>, State<StateType>, Optional<MappedType>>) - Static method in class org.apache.spark.streaming.StateSpec
Create a StateSpec for setting all the specifications of the mapWithState operation on a JavaPairDStream.
function(Function3<KeyType, Optional<ValueType>, State<StateType>, MappedType>) - Static method in class org.apache.spark.streaming.StateSpec
Create a StateSpec for setting all the specifications of the mapWithState operation on a JavaPairDStream.
Function0<R> - Interface in org.apache.spark.api.java.function
A zero-argument function that returns an R.
Function2<T1,T2,R> - Interface in org.apache.spark.api.java.function
A two-argument function that takes arguments of type T1 and T2 and returns an R.
Function3<T1,T2,T3,R> - Interface in org.apache.spark.api.java.function
A three-argument function that takes arguments of type T1, T2 and T3 and returns an R.
Function4<T1,T2,T3,T4,R> - Interface in org.apache.spark.api.java.function
A four-argument function that takes arguments of type T1, T2, T3 and T4 and returns an R.
functionExists(String) - Method in class org.apache.spark.sql.catalog.Catalog
Check if the function with the specified name exists.
functionExists(String, String) - Method in class org.apache.spark.sql.catalog.Catalog
Check if the function with the specified name exists in the specified database.
functionExists(String, String) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Return whether a function exists in the specified database.
functions - Class in org.apache.spark.sql
Commonly used functions available for DataFrame operations.
functions() - Constructor for class org.apache.spark.sql.functions

FutureAction<T> - Interface in org.apache.spark
A future for the result of an action to support cancellation.
futureExecutionContext() - Static method in class org.apache.spark.rdd.AsyncRDDActions

fwe() - Method in class org.apache.spark.ml.feature.ChiSqSelector

fwe() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel

fwe() - Method in interface org.apache.spark.ml.feature.ChiSqSelectorParams
The upper bound of the expected family-wise error rate.
fwe() - Method in class org.apache.spark.mllib.feature.ChiSqSelector


G

gain() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData

gain() - Method in class org.apache.spark.ml.tree.InternalNode

gain() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats

Gamma$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$

gamma1() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf

gamma2() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf

gamma6() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf

gamma7() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf

GammaGenerator - Class in org.apache.spark.mllib.random
:: DeveloperApi :: Generates i.i.d. samples from the gamma distribution with the given shape and scale.
GammaGenerator(double, double) - Constructor for class org.apache.spark.mllib.random.GammaGenerator

gammaJavaRDD(JavaSparkContext, double, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
Java-friendly version of RandomRDDs.gammaRDD.
gammaJavaRDD(JavaSparkContext, double, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.gammaJavaRDD with the default seed.
gammaJavaRDD(JavaSparkContext, double, double, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.gammaJavaRDD with the default number of partitions and the default seed.
gammaJavaVectorRDD(JavaSparkContext, double, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
Java-friendly version of RandomRDDs.gammaVectorRDD.
gammaJavaVectorRDD(JavaSparkContext, double, double, long, int, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.gammaJavaVectorRDD with the default seed.
gammaJavaVectorRDD(JavaSparkContext, double, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.gammaJavaVectorRDD with the default number of partitions and the default seed.
gammaRDD(SparkContext, double, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
Generates an RDD comprised of i.i.d.
gammaVectorRDD(SparkContext, double, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
Generates an RDD[Vector] with vectors containing i.i.d.
gapply(RelationalGroupedDataset, byte[], byte[], Object[], StructType) - Static method in class org.apache.spark.sql.api.r.SQLUtils
The helper function for gapply() on R side.
gaps() - Method in class org.apache.spark.ml.feature.RegexTokenizer
Indicates whether regex splits on gaps (true) or matches tokens (false).
GarbageCollectionMetrics - Class in org.apache.spark.metrics

GarbageCollectionMetrics() - Constructor for class org.apache.spark.metrics.GarbageCollectionMetrics

GAUGE() - Static method in class org.apache.spark.metrics.sink.StatsdMetricType

Gaussian$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$

GaussianMixture - Class in org.apache.spark.ml.clustering
Gaussian Mixture clustering.
GaussianMixture(String) - Constructor for class org.apache.spark.ml.clustering.GaussianMixture

GaussianMixture() - Constructor for class org.apache.spark.ml.clustering.GaussianMixture

GaussianMixture - Class in org.apache.spark.mllib.clustering
This class performs expectation maximization for multivariate Gaussian Mixture Models (GMMs).
GaussianMixture() - Constructor for class org.apache.spark.mllib.clustering.GaussianMixture
Constructs a default instance.
GaussianMixtureModel - Class in org.apache.spark.ml.clustering
Multivariate Gaussian Mixture Model (GMM) consisting of k Gaussians, where points are drawn from each Gaussian i with probability weights(i).
GaussianMixtureModel - Class in org.apache.spark.mllib.clustering
Multivariate Gaussian Mixture Model (GMM) consisting of k Gaussians, where points are drawn from each Gaussian i=1..k with probability w(i); mu(i) and sigma(i) are the respective mean and covariance for each Gaussian distribution i=1..k.
GaussianMixtureModel(double[], MultivariateGaussian[]) - Constructor for class org.apache.spark.mllib.clustering.GaussianMixtureModel

GaussianMixtureParams - Interface in org.apache.spark.ml.clustering
Common params for GaussianMixture and GaussianMixtureModel
GaussianMixtureSummary - Class in org.apache.spark.ml.clustering
Summary of GaussianMixture.
gaussians() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel

gaussians() - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel

gaussiansDF() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
Retrieve Gaussian distributions as a DataFrame.
GBTClassificationModel - Class in org.apache.spark.ml.classification
Gradient-Boosted Trees (GBTs) (http://en.wikipedia.org/wiki/Gradient_boosting) model for classification.
GBTClassificationModel(String, DecisionTreeRegressionModel[], double[]) - Constructor for class org.apache.spark.ml.classification.GBTClassificationModel
Construct a GBTClassificationModel
GBTClassifier - Class in org.apache.spark.ml.classification
Gradient-Boosted Trees (GBTs) (http://en.wikipedia.org/wiki/Gradient_boosting) learning algorithm for classification.
GBTClassifier(String) - Constructor for class org.apache.spark.ml.classification.GBTClassifier

GBTClassifier() - Constructor for class org.apache.spark.ml.classification.GBTClassifier

GBTClassifierParams - Interface in org.apache.spark.ml.tree

GBTParams - Interface in org.apache.spark.ml.tree
Parameters for Gradient-Boosted Tree algorithms.
GBTRegressionModel - Class in org.apache.spark.ml.regression
Gradient-Boosted Trees (GBTs) model for regression.
GBTRegressionModel(String, DecisionTreeRegressionModel[], double[]) - Constructor for class org.apache.spark.ml.regression.GBTRegressionModel
Construct a GBTRegressionModel
GBTRegressor - Class in org.apache.spark.ml.regression
Gradient-Boosted Trees (GBTs) learning algorithm for regression.
GBTRegressor(String) - Constructor for class org.apache.spark.ml.regression.GBTRegressor

GBTRegressor() - Constructor for class org.apache.spark.ml.regression.GBTRegressor

GBTRegressorParams - Interface in org.apache.spark.ml.tree

GC_TIME() - Static method in class org.apache.spark.status.TaskIndexNames

GC_TIME() - Static method in class org.apache.spark.ui.ToolTips

gemm(double, Matrix, DenseMatrix, double, DenseMatrix) - Static method in class org.apache.spark.ml.linalg.BLAS
C := alpha * A * B + beta * C
gemm(double, Matrix, DenseMatrix, double, DenseMatrix) - Static method in class org.apache.spark.mllib.linalg.BLAS
C := alpha * A * B + beta * C
gemv(double, Matrix, Vector, double, DenseVector) - Static method in class org.apache.spark.ml.linalg.BLAS
y := alpha * A * x + beta * y
gemv(double, Matrix, Vector, double, DenseVector) - Static method in class org.apache.spark.mllib.linalg.BLAS
y := alpha * A * x + beta * y
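gemm above is the standard BLAS level-3 update C := alpha * A * B + beta * C. A naive, Spark-free reference implementation on row-major nested lists, handy for checking small cases by hand:

```python
def gemm(alpha, A, B, beta, C):
    """In-place C := alpha * A * B + beta * C, the same update as BLAS gemm."""
    n, k, m = len(A), len(B), len(B[0])
    for i in range(n):
        for j in range(m):
            acc = sum(A[i][p] * B[p][j] for p in range(k))
            C[i][j] = alpha * acc + beta * C[i][j]
    return C

# With A = I: C becomes 2*B + C.
C = [[1.0, 1.0], [1.0, 1.0]]
gemm(2.0, [[1.0, 0.0], [0.0, 1.0]], [[3.0, 4.0], [5.0, 6.0]], 1.0, C)
print(C)  # [[7.0, 9.0], [11.0, 13.0]]
```

The real BLAS routine operates on column-major packed arrays and skips the multiply entirely when alpha is zero; this sketch only mirrors the arithmetic.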
GeneralizedLinearAlgorithm<M extends GeneralizedLinearModel> - Class in org.apache.spark.mllib.regression
:: DeveloperApi :: GeneralizedLinearAlgorithm implements methods to train a Generalized Linear Model (GLM).
GeneralizedLinearAlgorithm() - Constructor for class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm

GeneralizedLinearModel - Class in org.apache.spark.mllib.regression
:: DeveloperApi :: GeneralizedLinearModel (GLM) represents a model trained using GeneralizedLinearAlgorithm.
GeneralizedLinearModel(Vector, double) - Constructor for class org.apache.spark.mllib.regression.GeneralizedLinearModel

GeneralizedLinearRegression - Class in org.apache.spark.ml.regression
Fit a Generalized Linear Model (see Generalized linear model (Wikipedia)) specified by giving a symbolic description of the linear predictor (link function) and a description of the error distribution (family).
GeneralizedLinearRegression(String) - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression

GeneralizedLinearRegression() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression

GeneralizedLinearRegression.Binomial$ - Class in org.apache.spark.ml.regression
Binomial exponential family distribution.
GeneralizedLinearRegression.CLogLog$ - Class in org.apache.spark.ml.regression

GeneralizedLinearRegression.Family$ - Class in org.apache.spark.ml.regression

GeneralizedLinearRegression.FamilyAndLink$ - Class in org.apache.spark.ml.regression

GeneralizedLinearRegression.Gamma$ - Class in org.apache.spark.ml.regression
Gamma exponential family distribution.
GeneralizedLinearRegression.Gaussian$ - Class in org.apache.spark.ml.regression
Gaussian exponential family distribution.
GeneralizedLinearRegression.Identity$ - Class in org.apache.spark.ml.regression

GeneralizedLinearRegression.Inverse$ - Class in org.apache.spark.ml.regression

GeneralizedLinearRegression.Link$ - Class in org.apache.spark.ml.regression

GeneralizedLinearRegression.Log$ - Class in org.apache.spark.ml.regression

GeneralizedLinearRegression.Logit$ - Class in org.apache.spark.ml.regression

GeneralizedLinearRegression.Poisson$ - Class in org.apache.spark.ml.regression
Poisson exponential family distribution.
GeneralizedLinearRegression.Probit$ - Class in org.apache.spark.ml.regression

GeneralizedLinearRegression.Sqrt$ - Class in org.apache.spark.ml.regression

GeneralizedLinearRegression.Tweedie$ - Class in org.apache.spark.ml.regression

GeneralizedLinearRegressionBase - Interface in org.apache.spark.ml.regression
Params for Generalized Linear Regression.
GeneralizedLinearRegressionModel - Class in org.apache.spark.ml.regression
Model produced by GeneralizedLinearRegression.
GeneralizedLinearRegressionSummary - Class in org.apache.spark.ml.regression
Summary of GeneralizedLinearRegression model and predictions.
GeneralizedLinearRegressionTrainingSummary - Class in org.apache.spark.ml.regression
Summary of GeneralizedLinearRegression fitting and model.
GeneralMLWritable - Interface in org.apache.spark.ml.util
Trait for classes that provide GeneralMLWriter.
GeneralMLWriter - Class in org.apache.spark.ml.util
A ML Writer which delegates based on the requested format.
GeneralMLWriter(PipelineStage) - Constructor for class org.apache.spark.ml.util.GeneralMLWriter

generateAssociationRules(double) - Method in class org.apache.spark.mllib.fpm.FPGrowthModel
Generates association rules for the Items in freqItemsets.
generateKMeansRDD(SparkContext, int, int, int, double, int) - Static method in class org.apache.spark.mllib.util.KMeansDataGenerator
Generate an RDD containing test data for KMeans.
generateLinearInput(double, double[], int, int, double) - Static method in class org.apache.spark.mllib.util.LinearDataGenerator
For compatibility, the generated data without specifying the mean and variance will have zero mean and variance of (1.0/3.0), since the original output range is [-1, 1] with uniform distribution, and the variance of a uniform distribution is (b - a)^2 / 12, which comes to (1.0/3.0).
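The closed form behind that statement is Var(U[a, b]) = (b - a)^2 / 12, so U[-1, 1] has variance 4/12 = 1/3. A quick empirical sanity check in plain Python (sample size and tolerance are arbitrary choices for this sketch):

```python
import random

# Draw from U[-1, 1] and compare the sample variance to the closed form 1/3.
random.seed(42)
xs = [random.uniform(-1.0, 1.0) for _ in range(200_000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(abs(var - 1.0 / 3.0) < 0.01)  # True
```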
generateLinearInput(double, double[], double[], double[], int, int, double) - 类 中的静态方法org.apache.spark.mllib.util.LinearDataGenerator
 
generateLinearInput(double, double[], double[], double[], int, int, double, double) - 类 中的静态方法org.apache.spark.mllib.util.LinearDataGenerator
 
generateLinearInputAsList(double, double[], int, int, double) - 类 中的静态方法org.apache.spark.mllib.util.LinearDataGenerator
Return a Java List of synthetic data randomly generated according to a multi collinear model.
generateLinearRDD(SparkContext, int, int, double, int, double) - 类 中的静态方法org.apache.spark.mllib.util.LinearDataGenerator
Generate an RDD containing sample data for Linear Regression models - including Ridge, Lasso, and unregularized variants.
generateLogisticRDD(SparkContext, int, int, double, int, double) - 类 中的静态方法org.apache.spark.mllib.util.LogisticRegressionDataGenerator
Generate an RDD containing test data for LogisticRegression.
generateRandomEdges(int, int, int, long) - 类 中的静态方法org.apache.spark.graphx.util.GraphGenerators
 
generateRolledOverFileSuffix() - 接口 中的方法org.apache.spark.util.logging.RollingPolicy
Get the desired name of the rollover file
geq(Object) - 类 中的方法org.apache.spark.sql.Column
Greater than or equal to an expression.
get(Object) - Method in class org.apache.spark.api.java.JavaUtils.SerializableMapWrapper

get() - Method in class org.apache.spark.api.java.Optional

get() - Static method in class org.apache.spark.BarrierTaskContext
:: Experimental :: Returns the currently active BarrierTaskContext.
get() - Method in interface org.apache.spark.FutureAction
Blocks and returns the result of this job.
get(String) - Method in interface org.apache.spark.internal.config.ConfigProvider

get(Param<T>) - Method in class org.apache.spark.ml.param.ParamMap
Optionally returns the value associated with a param.
get(Param<T>) - Method in interface org.apache.spark.ml.param.Params
Optionally returns the user-supplied value of a param.
get(String) - Method in class org.apache.spark.SparkConf
Get a parameter; throws a NoSuchElementException if it's not set.
get(String, String) - Method in class org.apache.spark.SparkConf
Get a parameter, falling back to a default if not set.
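The two SparkConf.get variants above differ only in what happens when the key is absent: the one-argument form throws a NoSuchElementException, while the two-argument form falls back to the supplied default. A minimal stdlib sketch of that contract (a hypothetical Conf class for illustration, not the Spark implementation):

```python
class Conf:
    """Sketch of SparkConf's get(key) vs. get(key, default) contract."""

    def __init__(self):
        self._settings = {}

    def set(self, key, value):
        self._settings[key] = value
        return self  # SparkConf setters chain the same way

    def get(self, key, default=None):
        if default is None:
            if key not in self._settings:
                # Analogue of Scala's NoSuchElementException
                raise KeyError(key)
            return self._settings[key]
        return self._settings.get(key, default)


conf = Conf().set("spark.app.name", "demo")
assert conf.get("spark.app.name") == "demo"
assert conf.get("spark.executor.memory", "1g") == "1g"
```

The typed variants further down this index (getBoolean, getInt, getDouble on SparkConf) follow the same fall-back-to-default pattern, additionally parsing the stored string into the requested type.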
get() - Static method in class org.apache.spark.SparkEnv
Returns the SparkEnv.
get(String) - Static method in class org.apache.spark.SparkFiles
Get the absolute path of a file added through SparkContext.addFile().
get() - Method in interface org.apache.spark.sql.connector.read.PartitionReader
Return the current record.
get(String) - Static method in class org.apache.spark.sql.jdbc.JdbcDialects
Fetch the JdbcDialect class corresponding to a given database url.
get(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i.
get(String) - Method in class org.apache.spark.sql.RuntimeConfig
Returns the value of Spark runtime configuration property for the given key.
get(String, String) - Method in class org.apache.spark.sql.RuntimeConfig
Returns the value of Spark runtime configuration property for the given key.
get() - Method in interface org.apache.spark.sql.streaming.GroupState
Get the state value if it exists, or throw NoSuchElementException.
get(UUID) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
Returns the query if there is an active query with the given id, or null.
get(String) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
Returns the query if there is an active query with the given id, or null.
get(Object) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap

get(int, DataType) - Method in class org.apache.spark.sql.vectorized.ColumnarArray

get(int, DataType) - Method in class org.apache.spark.sql.vectorized.ColumnarRow

get() - Method in class org.apache.spark.streaming.State
Get the state if it exists; otherwise it will throw java.util.NoSuchElementException.
get() - Static method in class org.apache.spark.TaskContext
Return the currently active TaskContext.
get(long) - Static method in class org.apache.spark.util.AccumulatorContext
Returns the AccumulatorV2 registered with the given ID, if any.
get_json_object(Column, String) - Static method in class org.apache.spark.sql.functions
Extracts a JSON object from a JSON string based on the specified JSON path, and returns a JSON string of the extracted object.
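A rough stdlib sketch of get_json_object's path semantics, assuming simple dotted `$.a.b` paths only (this is a hypothetical helper for illustration, not the Spark implementation; Spark's version also supports array indexing and other JSONPath features):

```python
import json


def get_json_object(json_str, path):
    """Follow a dotted "$.a.b" path into a JSON string.

    Returns None on any miss, mirroring get_json_object's
    null-on-missing behavior.
    """
    if not path.startswith("$."):
        return None
    obj = json.loads(json_str)
    for key in path[2:].split("."):
        if not isinstance(obj, dict) or key not in obj:
            return None
        obj = obj[key]
    # Non-leaf results come back as JSON text, scalars as plain values.
    return json.dumps(obj) if isinstance(obj, (dict, list)) else obj


assert get_json_object('{"a": {"b": 42}}', "$.a.b") == 42
assert get_json_object('{"a": 1}', "$.b") is None
```

Note the last branch: matching an object or array returns it re-serialized as a JSON string, which is the behavior the summary above describes ("returns json string of the extracted json object").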
getAcceptanceResults(RDD<Tuple2<K, V>>, boolean, Map<K, Object>, Option<Map<K, Object>>, long) - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
Count the number of items instantly accepted and generate the waitlist for each stratum.
getActive() - Static method in class org.apache.spark.streaming.StreamingContext
Get the currently active context, if there is one.
getActiveJobIds() - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
Returns an array containing the ids of all active jobs.
getActiveJobIds() - Method in class org.apache.spark.SparkStatusTracker
Returns an array containing the ids of all active jobs.
getActiveOrCreate(Function0<StreamingContext>) - Static method in class org.apache.spark.streaming.StreamingContext
Either return the "active" StreamingContext (that is, started but not stopped), or create a new StreamingContext that is
getActiveOrCreate(String, Function0<StreamingContext>, Configuration, boolean) - Static method in class org.apache.spark.streaming.StreamingContext
Either get the currently active StreamingContext (that is, started but not stopped), OR recreate a StreamingContext from checkpoint data in the given path.
getActiveSession() - Static method in class org.apache.spark.sql.SparkSession
Returns the active SparkSession for the current thread, returned by the builder.
getActiveStageIds() - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
Returns an array containing the ids of all active stages.
getActiveStageIds() - Method in class org.apache.spark.SparkStatusTracker
Returns an array containing the ids of all active stages.
getAggregationDepth() - Method in interface org.apache.spark.ml.param.shared.HasAggregationDepth

getAlgo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

getAll() - Method in class org.apache.spark.SparkConf
Get all parameters as a list of pairs.
getAll() - Method in class org.apache.spark.sql.RuntimeConfig
Returns all properties set in this conf.
getAllClusterConfigs(SparkConf) - Static method in class org.apache.spark.kafka010.KafkaTokenSparkConf

getAllConfs() - Method in class org.apache.spark.sql.SQLContext
Return all the configuration properties that have been set (i.e. not the default).
getAllPools() - Method in class org.apache.spark.SparkContext
:: DeveloperApi :: Return pools for fair scheduler.
GetAllReceiverInfo - Class in org.apache.spark.streaming.scheduler

GetAllReceiverInfo() - Constructor for class org.apache.spark.streaming.scheduler.GetAllReceiverInfo

getAllWithPrefix(String) - Method in class org.apache.spark.SparkConf
Get all parameters that start with prefix.
getAlpha() - Method in interface org.apache.spark.ml.recommendation.ALSParams

getAlpha() - Method in class org.apache.spark.mllib.clustering.LDA
Alias for getDocConcentration.
getAnyValAs(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i.
getAppId() - Method in interface org.apache.spark.launcher.SparkAppHandle
Returns the application ID, or null if not yet known.
getAppId() - Method in class org.apache.spark.SparkConf
Returns the Spark application id, valid in the Driver after TaskScheduler registration and from the start in the Executor.
getApplicationInfo(String) - Method in interface org.apache.spark.status.api.v1.UIRoot

getApplicationInfoList() - Method in interface org.apache.spark.status.api.v1.UIRoot

getArray(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector

getArray(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray

getArray(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow

getArray(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Returns the array type value for rowId.
getAs(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i.
getAs(String) - Method in interface org.apache.spark.sql.Row
Returns the value of a given fieldName.
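Row thus offers both positional access (get(i), getAs(i)) and name-based access (getAs(fieldName)). The access pattern can be mimicked with a named tuple (a stand-in for illustration, not the Spark class, which also carries a schema and typed getters):

```python
from collections import namedtuple

# Hypothetical two-field row; org.apache.spark.sql.Row exposes the
# same two access styles over its schema's fields.
Row = namedtuple("Row", ["name", "age"])
r = Row("alice", 42)

assert r[0] == "alice"  # like get(0): value at position i
assert r.age == 42      # like getAs("age"): value by field name
```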
getAssociationRulesFromFP(Dataset<?>, String, String, double, Map<T, Object>, ClassTag<T>) - Static method in class org.apache.spark.ml.fpm.AssociationRules
Computes the association rules with confidence above minConfidence.
getAsymmetricAlpha() - Method in class org.apache.spark.mllib.clustering.LDA
Alias for getAsymmetricDocConcentration.
getAsymmetricDocConcentration() - Method in class org.apache.spark.mllib.clustering.LDA
Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").
getAttr(String) - Method in class org.apache.spark.ml.attribute.AttributeGroup
Gets an attribute by its name.
getAttr(int) - Method in class org.apache.spark.ml.attribute.AttributeGroup
Gets an attribute by its index.
getAvroSchema() - Method in class org.apache.spark.SparkConf
Gets all the avro schemas in the configuration used in the generic Avro record serializer.
getBatchingTimeout(SparkConf) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
How long we will wait for the wrappedLog in the BatchedWriteAheadLog to write the records before we fail the write attempt to unblock receivers.
getBernoulliSamplingFunction(RDD<Tuple2<K, V>>, Map<K, Object>, boolean, long) - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
Return the per-partition sampling function used for sampling without replacement.
getBeta() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

getBeta() - Method in class org.apache.spark.mllib.clustering.LDA
Alias for getTopicConcentration.
getBinary() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams

getBinary() - Method in class org.apache.spark.ml.feature.HashingTF

getBinary(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector

getBinary(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray

getBinary(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow

getBinary(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Returns the binary type value for rowId.
getBinaryWritable(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getBinaryWritableConstantObjectInspector(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getBlockSize() - Method in interface org.apache.spark.ml.classification.MultilayerPerceptronParams

GetBlockStatus(BlockId, boolean) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetBlockStatus

GetBlockStatus$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetBlockStatus$

getBoolean(String, boolean) - Method in class org.apache.spark.SparkConf
Get a parameter as a boolean, falling back to a default if not set.
getBoolean(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i as a primitive boolean.
getBoolean(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a Boolean.
getBoolean(String, boolean) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
Returns the boolean value to which the specified key is mapped, or defaultValue if there is no mapping for the key.
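CaseInsensitiveStringMap combines two behaviors: keys are compared case-insensitively, and the typed getters fall back to a default when the key is absent. A minimal sketch of both (a hypothetical illustration, not the Spark class; the real getBoolean also rejects values that parse as neither "true" nor "false"):

```python
class CaseInsensitiveStringMap:
    """Sketch: case-insensitive keys plus default-on-missing typed getters."""

    def __init__(self, options):
        # Normalize all keys to lower case at construction time.
        self._m = {k.lower(): v for k, v in options.items()}

    def get_boolean(self, key, default):
        v = self._m.get(key.lower())
        return default if v is None else v.lower() == "true"


m = CaseInsensitiveStringMap({"MergeSchema": "true"})
assert m.get_boolean("mergeschema", False) is True   # any casing matches
assert m.get_boolean("missing", False) is False      # falls back to default
```

The getDouble, getInt, and getLong entries below follow the same lookup-then-parse shape with their respective numeric types.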
getBoolean(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector

getBoolean(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray

getBoolean(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow

getBoolean(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Returns the boolean type value for rowId.
getBooleanArray(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a Boolean array.
getBooleans(int, int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Gets boolean type values from [rowId, rowId + count).
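The batch getters on ColumnVector (getBooleans, getBytes, getInts, getDoubles, ...) all read the half-open range [rowId, rowId + count), i.e. count elements starting at rowId. In Python slice terms:

```python
# A plain list standing in for a vectorized boolean column.
values = [True, False, True, True]

row_id, count = 1, 2
batch = values[row_id:row_id + count]  # half-open: [rowId, rowId + count)
assert batch == [False, True]
assert len(batch) == count
```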
getBooleanWritable(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getBooleanWritableConstantObjectInspector(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getBucketLength() - Method in interface org.apache.spark.ml.feature.BucketedRandomProjectionLSHParams

getBuilder() - Method in class org.apache.spark.storage.memory.DeserializedValuesHolder

getBuilder() - Method in class org.apache.spark.storage.memory.SerializedValuesHolder

getBuilder() - Method in interface org.apache.spark.storage.memory.ValuesHolder
Note: After this method is called, the ValuesHolder is invalid, and we can't store data or get the estimated size again.
getByte(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i as a primitive byte.
getByte(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector

getByte(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray

getByte(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow

getByte(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Returns the byte type value for rowId.
getBytes(int, int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Gets byte type values from [rowId, rowId + count).
getByteWritable(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getByteWritableConstantObjectInspector(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getCachedBlockManagerId(BlockManagerId) - Static method in class org.apache.spark.storage.BlockManagerId

getCachedMetadata(String) - Static method in class org.apache.spark.rdd.HadoopRDD
The three methods below are helpers for accessing the local map, a property of the SparkEnv of the local process.
getCacheNodeIds() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams

getCallSite(Function1<String, Object>) - Static method in class org.apache.spark.util.Utils
When called inside a class in the spark package, returns the name of the user code class (outside the spark package) that called into Spark, as well as which Spark method they called.
getCaseSensitive() - Method in class org.apache.spark.ml.feature.StopWordsRemover

getCatalystType(int, String, int, MetadataBuilder) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect

getCatalystType(int, String, int, MetadataBuilder) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect

getCatalystType(int, String, int, MetadataBuilder) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect

getCatalystType(int, String, int, MetadataBuilder) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
Get the custom datatype mapping for the given jdbc meta information.
getCatalystType(int, String, int, MetadataBuilder) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect

getCatalystType(int, String, int, MetadataBuilder) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect

getCatalystType(int, String, int, MetadataBuilder) - Static method in class org.apache.spark.sql.jdbc.NoopDialect

getCatalystType(int, String, int, MetadataBuilder) - Static method in class org.apache.spark.sql.jdbc.OracleDialect

getCatalystType(int, String, int, MetadataBuilder) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect

getCatalystType(int, String, int, MetadataBuilder) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect

getCategoricalCols() - Method in class org.apache.spark.ml.feature.FeatureHasher

getCategoricalFeatures(StructField) - Static method in class org.apache.spark.ml.util.MetadataUtils
Examine a schema to identify categorical (Binary and Nominal) features.
getCategoricalFeaturesInfo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

getCensorCol() - Method in interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams

getCheckpointDir() - Method in class org.apache.spark.api.java.JavaSparkContext

getCheckpointDir() - Method in class org.apache.spark.SparkContext

getCheckpointFile() - Method in interface org.apache.spark.api.java.JavaRDDLike
Gets the name of the file to which this RDD was checkpointed.
getCheckpointFile() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl

getCheckpointFile() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl

getCheckpointFile() - Method in class org.apache.spark.rdd.RDD
Gets the name of the directory to which this RDD was checkpointed.
getCheckpointFiles() - Method in class org.apache.spark.graphx.Graph
Gets the names of the files to which this Graph was checkpointed.
getCheckpointFiles() - Method in class org.apache.spark.graphx.impl.GraphImpl

getCheckpointFiles() - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
:: DeveloperApi :: If using checkpointing and LDA.keepLastCheckpoint is set to true, then there may be saved checkpoint files.
getCheckpointInterval() - Method in interface org.apache.spark.ml.param.shared.HasCheckpointInterval

getCheckpointInterval() - Method in class org.apache.spark.mllib.clustering.LDA
Period (in iterations) between checkpoints.
getCheckpointInterval() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

getChild(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector

getChild(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector

getClassifier() - Method in interface org.apache.spark.ml.classification.OneVsRestParams

getClusterConfig(SparkConf, String) - Static method in class org.apache.spark.kafka010.KafkaTokenSparkConf

getColdStartStrategy() - Method in interface org.apache.spark.ml.recommendation.ALSModelParams

getCollectSubModels() - Method in interface org.apache.spark.ml.param.shared.HasCollectSubModels

getColumnName(Seq<Object>, StructType) - Static method in class org.apache.spark.sql.util.SchemaUtils
Gets the name of the column in the given position.
getCombOp() - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
Returns the function used to combine results returned by seqOp from different partitions.
getComment() - Method in class org.apache.spark.sql.types.StructField
Return the comment of this StructField.
getConf() - Method in class org.apache.spark.api.java.JavaSparkContext
Return a copy of this JavaSparkContext's configuration.
getConf() - Method in interface org.apache.spark.input.Configurable

getConf() - Method in class org.apache.spark.rdd.HadoopRDD

getConf() - Method in class org.apache.spark.rdd.NewHadoopRDD

getConf() - Method in class org.apache.spark.SparkContext
Return a copy of this SparkContext's configuration.
getConf(String, String) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Returns the configuration for the given key in the current session.
getConf(String) - Method in class org.apache.spark.sql.SQLContext
Return the value of Spark SQL configuration property for the given key.
getConf(String, String) - Method in class org.apache.spark.sql.SQLContext
Return the value of Spark SQL configuration property for the given key.
getConfiguration() - Method in class org.apache.spark.input.PortableDataStream

getConfiguredLocalDirs(SparkConf) - Static method in class org.apache.spark.util.Utils
Return the configured local directories where Spark can write files.
getConnection() - Method in interface org.apache.spark.rdd.JdbcRDD.ConnectionFactory

getContextOrSparkClassLoader() - Static method in class org.apache.spark.util.Utils
Get the Context ClassLoader on this thread or, if not present, the ClassLoader that loaded Spark.
getConvergenceTol() - Method in class org.apache.spark.mllib.clustering.GaussianMixture
Return the largest change in log-likelihood at which convergence is considered to have occurred.
getCorrelationFromName(String) - Static method in class org.apache.spark.mllib.stat.correlation.Correlations

getCount() - Method in class org.apache.spark.storage.CountingWritableChannel

getCurrentProcessingTimeMs() - Method in interface org.apache.spark.sql.streaming.GroupState
Get the current processing time as milliseconds in epoch time.
getCurrentUserGroups(SparkConf, String) - Static method in class org.apache.spark.util.Utils

getCurrentUserName() - Static method in class org.apache.spark.util.Utils
Returns the current user name.
getCurrentWatermarkMs() - Method in interface org.apache.spark.sql.streaming.GroupState
Get the current event time watermark as milliseconds in epoch time.
getData(Row) - Static method in class org.apache.spark.ml.image.ImageSchema
Gets the image data.
getDatabase(String) - Method in class org.apache.spark.sql.catalog.Catalog
Get the database with the specified name.
getDatabase(String) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Returns the metadata for the specified database, throwing an exception if it doesn't exist.
getDate(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i of date type as java.sql.Date.
getDateWritable(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getDateWritableConstantObjectInspector(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getDecimal(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i of decimal type as java.math.BigDecimal.
getDecimal(int, int, int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector

getDecimal(int, int, int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray

getDecimal(int, int, int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow

getDecimal(int, int, int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Returns the decimal type value for rowId.
getDecimalWritable(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getDecimalWritableConstantObjectInspector(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getDefault(Param<T>) - Method in interface org.apache.spark.ml.param.Params
Gets the default value of a parameter.
getDefaultPropertiesFile(Map<String, String>) - Static method in class org.apache.spark.util.Utils
Return the path of the default Spark properties file.
getDefaultSession() - Static method in class org.apache.spark.sql.SparkSession
Returns the default SparkSession that is returned by the builder.
getDegree() - Method in class org.apache.spark.ml.feature.PolynomialExpansion

getDenseSizeInBytes() - Method in interface org.apache.spark.ml.linalg.Matrix
Gets the size of the dense representation of this `Matrix`.
getDependencies() - Method in class org.apache.spark.rdd.CoGroupedRDD

getDependencies() - Method in class org.apache.spark.rdd.ShuffledRDD

getDependencies() - Method in class org.apache.spark.rdd.UnionRDD

getDeprecatedConfig(String, Map<String, String>) - Static method in class org.apache.spark.SparkConf
Looks for available deprecated keys for the given config option, and returns the first value available.
getDistanceMeasure() - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator

getDistanceMeasure() - Method in interface org.apache.spark.ml.param.shared.HasDistanceMeasure

getDistanceMeasure() - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
The distance suite used by the algorithm.
getDistanceMeasure() - Method in class org.apache.spark.mllib.clustering.KMeans
The distance suite used by the algorithm.
getDistributions() - Method in class org.apache.spark.status.LiveRDD

getDocConcentration() - Method in interface org.apache.spark.ml.clustering.LDAParams

getDocConcentration() - Method in class org.apache.spark.mllib.clustering.LDA
Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").
getDouble(String, double) - Method in class org.apache.spark.SparkConf
Get a parameter as a double, falling back to a default if not set.
getDouble(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i as a primitive double.
getDouble(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a Double.
getDouble(String, double) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
Returns the double value to which the specified key is mapped, or defaultValue if there is no mapping for the key.
getDouble(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector

getDouble(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray

getDouble(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow

getDouble(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Returns the double type value for rowId.
getDoubleArray(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a Double array.
getDoubles(int, int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Gets double type values from [rowId, rowId + count).
getDoubleWritable(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getDoubleWritableConstantObjectInspector(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getDriverAttributes() - Method in interface org.apache.spark.scheduler.SchedulerBackend
Get the attributes on the driver.
getDriverLogUrls() - Method in interface org.apache.spark.scheduler.SchedulerBackend
Get the URLs for the driver logs.
getDropLast() - Method in interface org.apache.spark.ml.feature.OneHotEncoderBase

getDstCol() - Method in interface org.apache.spark.ml.clustering.PowerIterationClusteringParams

getDynamicAllocationInitialExecutors(SparkConf) - Static method in class org.apache.spark.util.Utils
Return the initial number of executors for dynamic allocation.
getElasticNetParam() - Method in interface org.apache.spark.ml.param.shared.HasElasticNetParam

getEndTimeEpoch() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo

getEps() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

getEpsilon() - Method in interface org.apache.spark.ml.regression.LinearRegressionParams

getEpsilon() - Method in class org.apache.spark.mllib.clustering.KMeans
The distance threshold within which we consider centers to have converged.
getError() - Method in interface org.apache.spark.launcher.SparkAppHandle
If the application failed due to an error, return the underlying error.
getEstimator() - Method in interface org.apache.spark.ml.tuning.ValidatorParams

getEstimatorParamMaps() - Method in interface org.apache.spark.ml.tuning.ValidatorParams

getEvaluator() - Method in interface org.apache.spark.ml.tuning.ValidatorParams

getExecutionContext() - Method in interface org.apache.spark.ml.param.shared.HasParallelism
Create a new execution context with a thread-pool that has a maximum number of threads set to the value of parallelism.
GetExecutorEndpointRef(String) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef

GetExecutorEndpointRef$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef$

getExecutorEnv() - Method in class org.apache.spark.SparkConf
Get all executor environment variables set on this SparkConf.
getExecutorIds() - Method in interface org.apache.spark.ExecutorAllocationClient
Get the list of currently active executors.
getExecutorInfos() - Method in class org.apache.spark.SparkStatusTracker
Returns information of all known executors, including host, port, cacheSize, numRunningTasks and memory metrics.
GetExecutorLossReason(String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.GetExecutorLossReason

GetExecutorLossReason$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.GetExecutorLossReason$

getExecutorMemoryStatus() - Method in class org.apache.spark.SparkContext
Return a map from the slave to the max memory available for caching and the remaining memory available for caching.
getExternalScratchDir(URI, Configuration, String) - Method in interface org.apache.spark.sql.hive.execution.SaveAsHiveFile

getExternalTmpPath(SparkSession, Configuration, Path) - Method in interface org.apache.spark.sql.hive.execution.SaveAsHiveFile

getExtTmpPathRelTo(Path, Configuration, String) - Method in interface org.apache.spark.sql.hive.execution.SaveAsHiveFile

getFamily() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams

getFamily() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase

getFdr() - Method in interface org.apache.spark.ml.feature.ChiSqSelectorParams

getFeatureIndex() - Method in interface org.apache.spark.ml.regression.IsotonicRegressionBase

getFeatureIndicesFromNames(StructField, String[]) - Static method in class org.apache.spark.ml.util.MetadataUtils
Takes a Vector column and a list of feature names, and returns the corresponding list of feature indices in the column, in order.
getFeatures() - Method in class org.apache.spark.ml.feature.LabeledPoint

getFeatures() - Method in class org.apache.spark.mllib.regression.LabeledPoint

getFeaturesAndLabels(RFormulaModel, Dataset<?>) - Static method in class org.apache.spark.ml.r.RWrapperUtils
Get the feature names and original labels from the schema of the DataFrame transformed by RFormulaModel.
getFeaturesCol() - Method in interface org.apache.spark.ml.param.shared.HasFeaturesCol

getFeatureSubsetStrategy() - Method in interface org.apache.spark.ml.tree.TreeEnsembleParams

getField(String) - Method in class org.apache.spark.sql.Column
An expression that gets a field by name in a StructType.
getFileLength(File, SparkConf) - Static method in class org.apache.spark.util.Utils
Return the file length; if the file is compressed, it returns the uncompressed file length.
getFileReader(String, Option<Configuration>, boolean) - Static method in class org.apache.spark.sql.hive.orc.OrcFileOperator
Retrieves an ORC file reader from a given path.
getFileSegmentLocations(String, long, long, Configuration) - Static method in class org.apache.spark.streaming.util.HdfsUtils
Get the locations of the HDFS blocks containing the given file segment.
getFileSystemForPath(Path, Configuration) - Static method in class org.apache.spark.streaming.util.HdfsUtils

getFinalStorageLevel() - Method in interface org.apache.spark.ml.recommendation.ALSParams

getFinalValue() - Method in class org.apache.spark.partial.PartialResult
Blocking method to wait for and return the final value.
getFitIntercept() - Method in interface org.apache.spark.ml.param.shared.HasFitIntercept

getFloat(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i as a primitive float.
getFloat(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector

getFloat(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray

getFloat(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow

getFloat(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Returns the float type value for rowId.
getFloats(int, int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Gets float type values from [rowId, rowId + count).
getFloatWritable(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getFloatWritableConstantObjectInspector(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getForceIndexLabel() - Method in interface org.apache.spark.ml.feature.RFormulaBase

getFormattedClassName(Object) - Static method in class org.apache.spark.util.Utils
Return the class name of the given object, removing all dollar signs.
getFormula() - Method in interface org.apache.spark.ml.feature.RFormulaBase

getFpr() - Method in interface org.apache.spark.ml.feature.ChiSqSelectorParams

getFunction(String) - Method in class org.apache.spark.sql.catalog.Catalog
Get the function with the specified name.
getFunction(String, String) - Method in class org.apache.spark.sql.catalog.Catalog
Get the function with the specified name.
getFunction(String, String) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Return an existing function in the database, assuming it exists.
getFunctionOption(String, String) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Return an existing function in the database, or None if it doesn't exist.
getFwe() - Method in interface org.apache.spark.ml.feature.ChiSqSelectorParams

getGaps() - Method in class org.apache.spark.ml.feature.RegexTokenizer

getGroups(String) - Method in interface org.apache.spark.security.GroupMappingServiceProvider
Get the groups the user belongs to.
getHadoopFileSystem(URI, Configuration) - Static method in class org.apache.spark.util.Utils
Return a Hadoop FileSystem with the scheme encoded in the given path.
getHadoopFileSystem(String, Configuration) - Static method in class org.apache.spark.util.Utils
Return a Hadoop FileSystem with the scheme encoded in the given path.
getHandleInvalid() - Method in interface org.apache.spark.ml.param.shared.HasHandleInvalid

getHeight(Row) - Static method in class org.apache.spark.ml.image.ImageSchema
Gets the height of the image.
getHiveWriteCompression(TableDesc, SQLConf) - Static method in class org.apache.spark.sql.hive.execution.HiveOptions

getImplicitPrefs() - Method in interface org.apache.spark.ml.recommendation.ALSParams

getImpurity() - Method in interface org.apache.spark.ml.tree.HasVarianceImpurity

getImpurity() - Method in interface org.apache.spark.ml.tree.TreeClassifierParams

getImpurity() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

getIndices() - Method in class org.apache.spark.ml.feature.VectorSlicer

getInitializationMode() - Method in class org.apache.spark.mllib.clustering.KMeans
The initialization algorithm.
getInitializationSteps() - Method in class org.apache.spark.mllib.clustering.KMeans
Number of steps for the k-means|| initialization mode.
getInitialModel() - Method in class org.apache.spark.mllib.clustering.GaussianMixture
Return the user-supplied initial GMM, if supplied.
getInitialTargetExecutorNumber(SparkConf, int) - Static method in class org.apache.spark.scheduler.cluster.SchedulerBackendUtils
Getting the initial target number of executors depends on whether dynamic allocation is enabled.
getInitialWeights() - Method in interface org.apache.spark.ml.classification.MultilayerPerceptronParams

getInitMode() - Method in interface org.apache.spark.ml.clustering.KMeansParams

getInitMode() - Method in interface org.apache.spark.ml.clustering.PowerIterationClusteringParams

getInitSteps() - Method in interface org.apache.spark.ml.clustering.KMeansParams

getInOutCols() - Method in interface org.apache.spark.ml.feature.ImputerParams
Returns the corresponding pairs of input and output column names.
getInOutCols() - Method in interface org.apache.spark.ml.feature.OneHotEncoderBase
Returns the corresponding pairs of input and output column names.
getInOutCols() - Method in interface org.apache.spark.ml.feature.StringIndexerBase
Returns the corresponding pairs of input and output column names.
getInputCol() - Method in interface org.apache.spark.ml.param.shared.HasInputCol

getInputCols() - 接口 中的方法org.apache.spark.ml.param.shared.HasInputCols
 
getInputFilePath() - 类 中的静态方法org.apache.spark.rdd.InputFileBlockHolder
Returns the holding file name or empty string if it is unknown.
getInputStream(String, Configuration) - 类 中的静态方法org.apache.spark.streaming.util.HdfsUtils
 
getInstant(int) - 接口 中的方法org.apache.spark.sql.Row
Returns the value at position i of date type as java.time.Instant.
getInt(String, int) - 类 中的方法org.apache.spark.SparkConf
Get a parameter as an integer, falling back to a default if not set
getInt(int) - 接口 中的方法org.apache.spark.sql.Row
Returns the value at position i as a primitive int.
getInt(String, int) - 类 中的方法org.apache.spark.sql.util.CaseInsensitiveStringMap
Returns the integer value to which the specified key is mapped, or defaultValue if there is no mapping for the key.
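The typed getters on CaseInsensitiveStringMap combine a case-insensitive key lookup with a default-value fallback. A minimal sketch of that behavior, written in Python purely for illustration (the class and method names come from the entry above; the implementation details are assumptions, not Spark's actual code):

```python
class CaseInsensitiveStringMap:
    """Sketch: keys are normalized to lower case on insertion and lookup."""

    def __init__(self, options):
        self._options = {k.lower(): v for k, v in options.items()}

    def get_int(self, key, default_value):
        # Look the key up case-insensitively; fall back to the default
        # when the key is absent.
        value = self._options.get(key.lower())
        return int(value) if value is not None else default_value

m = CaseInsensitiveStringMap({"FetchSize": "100"})
assert m.get_int("fetchsize", 10) == 100   # hit, regardless of case
assert m.get_int("batchsize", 10) == 10    # miss, default returned
```

The same pattern underlies the sibling getters (getLong, getBoolean, ...): one normalized lookup, one typed conversion, one default.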
getInt(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector

getInt(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray

getInt(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow

getInt(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Returns the int type value for rowId.
getIntermediateStorageLevel() - Method in interface org.apache.spark.ml.recommendation.ALSParams

getInterval(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray

getInterval(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow

getInterval(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Returns the calendar interval type value for rowId.
getInts(int, int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Gets int type values from [rowId, rowId + count).
getIntWritable(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getIntWritableConstantObjectInspector(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getInverse() - Method in class org.apache.spark.ml.feature.DCT

getIsExperiment() - Method in class org.apache.spark.mllib.stat.test.BinarySample

getIsotonic() - Method in interface org.apache.spark.ml.regression.IsotonicRegressionBase

getItem(Object) - Method in class org.apache.spark.sql.Column
An expression that gets an item at position ordinal out of an array, or gets a value by key key in a MapType.
getItemCol() - Method in interface org.apache.spark.ml.recommendation.ALSModelParams

getItemsCol() - Method in interface org.apache.spark.ml.fpm.FPGrowthParams

getIteratorSize(Iterator<?>) - Static method in class org.apache.spark.util.Utils
Counts the number of elements of an iterator using a while loop rather than calling TraversableOnce.size(), because the latter uses a for loop, which is slightly slower in the current version of Scala.
getIteratorZipWithIndex(Iterator<T>, long) - Static method in class org.apache.spark.util.Utils
Generate a zipWithIndex iterator, avoiding the index-value overflow problem in Scala's zipWithIndex.
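Scala's built-in zipWithIndex produces Int indices, which cannot represent positions past 2^31 - 1; Utils.getIteratorZipWithIndex instead threads a 64-bit start offset through the iterator. A rough sketch of the idea in Python (where integers are unbounded, so the start-offset parameter is the relevant part; the helper name is illustrative):

```python
def zip_with_index(iterator, start_index):
    """Pair each element with a running index beginning at start_index,
    mirroring the (element, index) ordering of Scala's zipWithIndex."""
    index = start_index
    for element in iterator:
        yield (element, index)
        index += 1

# Indexing can start beyond the 32-bit range, which is the point of
# taking a long offset instead of always starting at 0.
pairs = list(zip_with_index(iter("abc"), 2**31))
```

This is how a global index can be assigned across partitions: each partition receives the cumulative element count of the partitions before it as its start offset.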
getJavaMap(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i of map type as a java.util.Map.
getJavaSparkContext(SparkSession) - Static method in class org.apache.spark.sql.api.r.SQLUtils

getJDBCType(DataType) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect

getJDBCType(DataType) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect

getJDBCType(DataType) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect

getJDBCType(DataType) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
Retrieve the JDBC/SQL type for a given data type.
getJDBCType(DataType) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect

getJDBCType(DataType) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect

getJDBCType(DataType) - Static method in class org.apache.spark.sql.jdbc.NoopDialect

getJDBCType(DataType) - Static method in class org.apache.spark.sql.jdbc.OracleDialect

getJDBCType(DataType) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect

getJDBCType(DataType) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect

getJobIdsForGroup(String) - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
Return a list of all known jobs in a particular job group.
getJobIdsForGroup(String) - Method in class org.apache.spark.SparkStatusTracker
Return a list of all known jobs in a particular job group.
getJobInfo(int) - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
Returns job information, or null if the job info could not be found or was garbage collected.
getJobInfo(int) - Method in class org.apache.spark.SparkStatusTracker
Returns job information, or None if the job info could not be found or was garbage collected.
getK() - Method in interface org.apache.spark.ml.clustering.BisectingKMeansParams

getK() - Method in interface org.apache.spark.ml.clustering.GaussianMixtureParams

getK() - Method in interface org.apache.spark.ml.clustering.KMeansParams

getK() - Method in interface org.apache.spark.ml.clustering.LDAParams

getK() - Method in interface org.apache.spark.ml.clustering.PowerIterationClusteringParams

getK() - Method in class org.apache.spark.ml.evaluation.RankingEvaluator

getK() - Method in interface org.apache.spark.ml.feature.PCAParams

getK() - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
Gets the desired number of leaf clusters.
getK() - Method in class org.apache.spark.mllib.clustering.GaussianMixture
Return the number of Gaussians in the mixture model.
getK() - Method in class org.apache.spark.mllib.clustering.KMeans
Number of clusters to create (k).
getK() - Method in class org.apache.spark.mllib.clustering.LDA
Number of topics to infer, i.e., the number of soft cluster centers.
getKappa() - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
Learning rate: exponential decay rate.
getKeepLastCheckpoint() - Method in interface org.apache.spark.ml.clustering.LDAParams

getKeepLastCheckpoint() - Method in class org.apache.spark.mllib.clustering.EMLDAOptimizer
If using checkpointing, this indicates whether to keep the last checkpoint (vs. cleaning it up).
getKeytabJaasParams(String, String, String) - Static method in class org.apache.spark.kafka010.KafkaTokenUtil

getKrb5LoginModuleName() - Static method in class org.apache.spark.kafka010.KafkaTokenUtil
The Krb5LoginModule package varies across JVMs.
getLabel() - Method in class org.apache.spark.ml.feature.LabeledPoint

getLabel() - Method in class org.apache.spark.mllib.regression.LabeledPoint

getLabelCol() - Method in interface org.apache.spark.ml.param.shared.HasLabelCol

getLabels() - Method in class org.apache.spark.ml.feature.IndexToString

getLambda() - Method in class org.apache.spark.mllib.classification.NaiveBayes
Get the smoothing parameter.
getLastUpdatedEpoch() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo

getLayers() - Method in interface org.apache.spark.ml.classification.MultilayerPerceptronParams

getLDAModel(double[]) - Method in interface org.apache.spark.mllib.clustering.LDAOptimizer

getLeafCol() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams

getLearningDecay() - Method in interface org.apache.spark.ml.clustering.LDAParams

getLearningOffset() - Method in interface org.apache.spark.ml.clustering.LDAParams

getLearningRate() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy

getLeastGroupHash(String) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
Gets the least element of the list associated with the key in groupHash. The returned PartitionGroup is the least loaded of all groups that represent the machine "key".
getLength() - Static method in class org.apache.spark.rdd.InputFileBlockHolder
Returns the length of the block being read, or -1 if it is unknown.
getLink() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase

getLinkPower() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase

getLinkPredictionCol() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase

getList(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i of array type as java.util.List.
getLocalDate(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i of date type as java.time.LocalDate.
getLocalDir(SparkConf) - Static method in class org.apache.spark.util.Utils
Get the path of a temporary directory.
getLocale() - Method in class org.apache.spark.ml.feature.StopWordsRemover

getLocalProperty(String) - Method in class org.apache.spark.api.java.JavaSparkContext
Get a local property set in this thread, or null if it is missing.
getLocalProperty(String) - Method in class org.apache.spark.BarrierTaskContext

getLocalProperty(String) - Method in class org.apache.spark.SparkContext
Get a local property set in this thread, or null if it is missing.
getLocalProperty(String) - Method in class org.apache.spark.TaskContext
Get a local property set upstream in the driver, or null if it is missing.
getLocalUserJarsForShell(SparkConf) - Static method in class org.apache.spark.util.Utils
Return the local jar files which will be added to the REPL's classpath.
GetLocations(BlockId) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetLocations

GetLocations$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetLocations$

GetLocationsAndStatus(BlockId, String) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus

GetLocationsAndStatus$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus$

GetLocationsMultipleBlockIds(BlockId[]) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds

GetLocationsMultipleBlockIds$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds$

getLong(String, long) - Method in class org.apache.spark.SparkConf
Get a parameter as a long, falling back to a default if not set.
getLong(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i as a primitive long.
getLong(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a Long.
getLong(String, long) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
Returns the long value to which the specified key is mapped, or defaultValue if there is no mapping for the key.
getLong(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector

getLong(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray

getLong(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow

getLong(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Returns the long type value for rowId.
getLongArray(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a Long array.
getLongs(int, int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Gets long type values from [rowId, rowId + count).
getLongWritable(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getLongWritableConstantObjectInspector(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getLoss() - Method in interface org.apache.spark.ml.param.shared.HasLoss

getLoss() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy

getLossType() - Method in interface org.apache.spark.ml.tree.GBTClassifierParams

getLossType() - Method in interface org.apache.spark.ml.tree.GBTRegressorParams

getLower() - Method in interface org.apache.spark.ml.feature.RobustScalerParams

getLowerBound(double, long, double) - Static method in class org.apache.spark.util.random.BinomialBounds
Returns a threshold p such that if we conduct n Bernoulli trials with success rate = p, it is very unlikely to have more than fraction * n successes.
getLowerBound(double) - Static method in class org.apache.spark.util.random.PoissonBounds
Returns a lambda such that Pr[X > s] is very small, where X ~ Pois(lambda).
getLowerBoundsOnCoefficients() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams

getLowerBoundsOnIntercepts() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams

getMap(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i of map type as a Scala Map.
getMap(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector

getMap(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray

getMap(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow

getMap(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Returns the map type value for rowId.
GetMatchingBlockIds(Function1<BlockId, Object>, boolean) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds

GetMatchingBlockIds$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds$

getMax() - Method in interface org.apache.spark.ml.feature.MinMaxScalerParams

getMaxBins() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams

getMaxBins() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

getMaxCategories() - Method in interface org.apache.spark.ml.feature.VectorIndexerParams

getMaxDepth() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams

getMaxDepth() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

getMaxDF() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams

getMaxFailures(SparkConf, boolean) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils

getMaxIter() - Method in interface org.apache.spark.ml.param.shared.HasMaxIter

getMaxIterations() - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
Gets the max number of k-means iterations to split clusters.
getMaxIterations() - Method in class org.apache.spark.mllib.clustering.GaussianMixture
Return the maximum number of iterations allowed.
getMaxIterations() - Method in class org.apache.spark.mllib.clustering.KMeans
Maximum number of iterations allowed.
getMaxIterations() - Method in class org.apache.spark.mllib.clustering.LDA
Maximum number of iterations allowed.
getMaxLocalProjDBSize() - Method in class org.apache.spark.ml.fpm.PrefixSpan

getMaxLocalProjDBSize() - Method in class org.apache.spark.mllib.fpm.PrefixSpan
Gets the maximum number of items allowed in a projected database before local processing.
getMaxMemoryInMB() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams

getMaxMemoryInMB() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

getMaxPatternLength() - Method in class org.apache.spark.ml.fpm.PrefixSpan

getMaxPatternLength() - Method in class org.apache.spark.mllib.fpm.PrefixSpan
Gets the maximal pattern length (i.e., the length of the longest sequential pattern to consider).
getMaxSentenceLength() - Method in interface org.apache.spark.ml.feature.Word2VecBase

GetMemoryStatus$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetMemoryStatus$

getMessage() - Method in exception org.apache.spark.sql.AnalysisException

getMetadata(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a Metadata.
getMetadataArray(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a Metadata array.
getMetricLabel() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

getMetricLabel() - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator

getMetricName() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator

getMetricName() - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator

getMetricName() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

getMetricName() - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator

getMetricName() - Method in class org.apache.spark.ml.evaluation.RankingEvaluator

getMetricName() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator

getMetricsSources(String) - Method in class org.apache.spark.BarrierTaskContext

getMetricsSources(String) - Method in class org.apache.spark.TaskContext
:: DeveloperApi :: Returns all metrics sources with the given name that are associated with the instance that runs the task.
getMetricValue(MemoryManager) - Method in interface org.apache.spark.metrics.SingleValueExecutorMetricType

getMetricValues(MemoryManager) - Method in interface org.apache.spark.metrics.ExecutorMetricType

getMetricValues(MemoryManager) - Method in interface org.apache.spark.metrics.SingleValueExecutorMetricType

getMin() - Method in interface org.apache.spark.ml.feature.MinMaxScalerParams

getMinConfidence() - Method in interface org.apache.spark.ml.fpm.FPGrowthParams

getMinCount() - Method in interface org.apache.spark.ml.feature.Word2VecBase

getMinDF() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams

getMinDivisibleClusterSize() - Method in interface org.apache.spark.ml.clustering.BisectingKMeansParams

getMinDivisibleClusterSize() - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
Gets the minimum number of points (if greater than or equal to 1.0) or the minimum proportion of points (if less than 1.0) of a divisible cluster.
getMinDocFreq() - Method in interface org.apache.spark.ml.feature.IDFBase

getMiniBatchFraction() - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
Mini-batch fraction, which sets the fraction of documents sampled and used in each iteration.
getMinInfoGain() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams

getMinInfoGain() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

getMinInstancesPerNode() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams

getMinInstancesPerNode() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

getMinSupport() - Method in interface org.apache.spark.ml.fpm.FPGrowthParams

getMinSupport() - Method in class org.apache.spark.ml.fpm.PrefixSpan

getMinSupport() - Method in class org.apache.spark.mllib.fpm.PrefixSpan
Get the minimal support (i.e., the frequency of occurrence before a pattern is considered frequent).
getMinTF() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams

getMinTokenLength() - Method in class org.apache.spark.ml.feature.RegexTokenizer

getMinWeightFractionPerNode() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams

getMinWeightFractionPerNode() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

getMissingValue() - Method in interface org.apache.spark.ml.feature.ImputerParams

getMode(Row) - Static method in class org.apache.spark.ml.image.ImageSchema
Gets the OpenCV representation as an int.
getModelType() - Method in interface org.apache.spark.ml.classification.NaiveBayesParams

getModelType() - Method in class org.apache.spark.mllib.classification.NaiveBayes
Get the model type.
getN() - Method in class org.apache.spark.ml.feature.NGram

getNames() - Method in class org.apache.spark.ml.feature.VectorSlicer

getNChannels(Row) - Static method in class org.apache.spark.ml.image.ImageSchema
Gets the number of channels in the image.
getNode(int, Node) - Static method in class org.apache.spark.mllib.tree.model.Node
Traces down from a root node to get the node with the given node index.
getNonnegative() - Method in interface org.apache.spark.ml.recommendation.ALSParams

getNumBins() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator

getNumBuckets() - Method in interface org.apache.spark.ml.feature.QuantileDiscretizerBase

getNumBucketsArray() - Method in interface org.apache.spark.ml.feature.QuantileDiscretizerBase

getNumBytesWritten() - Method in interface org.apache.spark.shuffle.api.ShufflePartitionWriter
Returns the number of bytes written either by this writer's output stream opened by ShufflePartitionWriter.openStream() or by the byte channel opened by ShufflePartitionWriter.openChannelWrapper().
getNumClasses(StructField) - Static method in class org.apache.spark.ml.util.MetadataUtils
Examine a schema to identify the number of classes in a label column.
getNumClasses() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

getNumFeatures() - Method in interface org.apache.spark.ml.param.shared.HasNumFeatures

getNumFeatures() - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
The dimension of training features.
getNumFolds() - Method in interface org.apache.spark.ml.tuning.CrossValidatorParams

getNumHashTables() - Method in interface org.apache.spark.ml.feature.LSHParams

getNumItemBlocks() - Method in interface org.apache.spark.ml.recommendation.ALSParams

getNumIterations() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy

getNumObjFields() - Method in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods

getNumPartitions() - Method in interface org.apache.spark.api.java.JavaRDDLike
Return the number of partitions in this RDD.
getNumPartitions() - Method in interface org.apache.spark.ml.feature.Word2VecBase

getNumPartitions() - Method in interface org.apache.spark.ml.fpm.FPGrowthParams

getNumPartitions() - Method in class org.apache.spark.rdd.RDD
Returns the number of partitions of this RDD.
getNumTopFeatures() - Method in interface org.apache.spark.ml.feature.ChiSqSelectorParams

getNumTrees() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
Number of trees in the ensemble.
getNumTrees() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
Number of trees in the ensemble.
getNumTrees() - Method in interface org.apache.spark.ml.tree.RandomForestParams

getNumUserBlocks() - Method in interface org.apache.spark.ml.recommendation.ALSParams

getNumValues() - Method in class org.apache.spark.ml.attribute.NominalAttribute
Get the number of values, either from numValues or from values.
getObjectInspector(String, Option<Configuration>) - Static method in class org.apache.spark.sql.hive.orc.OrcFileOperator

getObjFieldValues(Object, Object[]) - Method in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods

getOffset() - Method in interface org.apache.spark.sql.connector.read.streaming.ContinuousPartitionReader
Get the offset of the current record, or the start offset if no records have been read.
getOffsetCol() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase

getOldBoostingStrategy(Map<Object, Object>, Enumeration.Value) - Method in interface org.apache.spark.ml.tree.GBTParams
(private[ml]) Create a BoostingStrategy instance to use with the old API.
getOldDocConcentration() - Method in interface org.apache.spark.ml.clustering.LDAParams
Get the docConcentration used by spark.mllib LDA.
getOldImpurity() - Method in interface org.apache.spark.ml.tree.HasVarianceImpurity
Convert the new impurity to the old impurity.
getOldImpurity() - Method in interface org.apache.spark.ml.tree.TreeClassifierParams
Convert the new impurity to the old impurity.
getOldLossType() - Method in interface org.apache.spark.ml.tree.GBTClassifierParams
(private[ml]) Convert the new loss to the old loss.
getOldLossType() - Method in interface org.apache.spark.ml.tree.GBTParams
Get the old Gradient Boosting loss type.
getOldLossType() - Method in interface org.apache.spark.ml.tree.GBTRegressorParams
(private[ml]) Convert the new loss to the old loss.
getOldOptimizer() - Method in interface org.apache.spark.ml.clustering.LDAParams

getOldStrategy(Map<Object, Object>, int, Enumeration.Value, Impurity, double) - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
(private[ml]) Create a Strategy instance to use with the old API.
getOldStrategy(Map<Object, Object>, int, Enumeration.Value, Impurity) - Method in interface org.apache.spark.ml.tree.TreeEnsembleParams
Create a Strategy instance to use with the old API.
getOldTopicConcentration() - Method in interface org.apache.spark.ml.clustering.LDAParams
Get the topicConcentration used by spark.mllib LDA.
getOptimizeDocConcentration() - Method in interface org.apache.spark.ml.clustering.LDAParams

getOptimizeDocConcentration() - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
Optimize docConcentration; indicates whether docConcentration (the Dirichlet parameter for the document-topic distribution) will be optimized during training.
getOptimizer() - Method in interface org.apache.spark.ml.clustering.LDAParams

getOptimizer() - Method in class org.apache.spark.mllib.clustering.LDA
:: DeveloperApi :: The LDAOptimizer used to perform the actual calculation.
getOption(String) - Method in class org.apache.spark.SparkConf
Get a parameter as an Option.
getOption(String) - Method in class org.apache.spark.sql.RuntimeConfig
Returns the value of the Spark runtime configuration property for the given key.
getOption() - Method in interface org.apache.spark.sql.streaming.GroupState
Get the state value as a scala Option.
getOption() - Method in class org.apache.spark.streaming.State
Get the state as a scala.Option.
getOrCreate(SparkConf) - Static method in class org.apache.spark.SparkContext
This function may be used to get or instantiate a SparkContext and register it as a singleton object.
getOrCreate() - Static method in class org.apache.spark.SparkContext
This function may be used to get or instantiate a SparkContext and register it as a singleton object.
getOrCreate() - Method in class org.apache.spark.sql.SparkSession.Builder
Gets an existing SparkSession or, if there is no existing one, creates a new one based on the options set in this builder.
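The getOrCreate entries above all follow the same get-or-create singleton pattern: return the registered instance if one exists, otherwise build one and register it. A thread-safe sketch of the pattern in Python (names are illustrative; this is not Spark's actual implementation):

```python
import threading

_lock = threading.Lock()
_active_instance = None

def get_or_create(factory):
    """Return the active instance, creating and registering one on first use."""
    global _active_instance
    with _lock:  # serialize creation so concurrent callers share one instance
        if _active_instance is None:
            _active_instance = factory()
        return _active_instance

a = get_or_create(lambda: object())
b = get_or_create(lambda: object())
assert a is b  # the second call reuses the registered instance
```

In Spark the factory arguments are the builder options (or a SparkConf), which is why options set after the first getOrCreate call may not take effect on the shared instance.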
getOrCreate(String, Function0<JavaStreamingContext>) - Static method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Either recreate a StreamingContext from checkpoint data or create a new StreamingContext.
getOrCreate(String, Function0<JavaStreamingContext>, Configuration) - Static method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Either recreate a StreamingContext from checkpoint data or create a new StreamingContext.
getOrCreate(String, Function0<JavaStreamingContext>, Configuration, boolean) - Static method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Either recreate a StreamingContext from checkpoint data or create a new StreamingContext.
getOrCreate(String, Function0<StreamingContext>, Configuration, boolean) - Static method in class org.apache.spark.streaming.StreamingContext
Either recreate a StreamingContext from checkpoint data or create a new StreamingContext.
getOrCreateSparkSession(JavaSparkContext, Map<Object, Object>, boolean) - Static method in class org.apache.spark.sql.api.r.SQLUtils

getOrDefault(Param<T>) - Method in interface org.apache.spark.ml.param.Params
Gets the value of a param in the embedded param map or its default value.
getOrDiscoverAllResources(SparkConf, String, Option<String>) - Static method in class org.apache.spark.resource.ResourceUtils
Gets all allocated resource information for the input component from the input resources file and discovers the remaining via discovery scripts.
getOrElse(Param<T>, T) - Method in class org.apache.spark.ml.param.ParamMap
Returns the value associated with a param, or a default value.
getOrigin(Row) - Static method in class org.apache.spark.ml.image.ImageSchema
Gets the origin of the image.
getOutputAttrGroupFromData(Dataset<?>, Seq<String>, Seq<String>, boolean) - Static method in class org.apache.spark.ml.feature.OneHotEncoderCommon
This method is called when we want to generate an AttributeGroup from actual data for the one-hot encoder.
getOutputCol() - Method in interface org.apache.spark.ml.param.shared.HasOutputCol

getOutputCols() - Method in interface org.apache.spark.ml.param.shared.HasOutputCols

getOutputSize(int) - Method in interface org.apache.spark.ml.ann.Layer
Returns the output size given the input size (not counting the stack size).
getOutputStream(String, Configuration) - Static method in class org.apache.spark.streaming.util.HdfsUtils

getP() - Method in class org.apache.spark.ml.feature.Normalizer

getParallelism() - Method in interface org.apache.spark.ml.param.shared.HasParallelism

getParam(String) - Method in interface org.apache.spark.ml.param.Params
Gets a param by its name.
getParameter(String) - Method in class org.apache.spark.ui.XssSafeRequest

getParameterMap() - Method in class org.apache.spark.ui.XssSafeRequest

getParameterNames() - Method in class org.apache.spark.ui.XssSafeRequest

getParameterValues(String) - Method in class org.apache.spark.ui.XssSafeRequest

getParents(int) - Method in class org.apache.spark.NarrowDependency
Get the parent partitions for a child partition.
getParents(int) - Method in class org.apache.spark.OneToOneDependency

getParents(int) - Method in class org.apache.spark.RangeDependency

getPartition(long, long, int) - Method in class org.apache.spark.graphx.PartitionStrategy.CanonicalRandomVertexCut$

getPartition(long, long, int) - Method in class org.apache.spark.graphx.PartitionStrategy.EdgePartition1D$

getPartition(long, long, int) - Method in class org.apache.spark.graphx.PartitionStrategy.EdgePartition2D$

getPartition(long, long, int) - Method in interface org.apache.spark.graphx.PartitionStrategy
Returns the partition number for a given edge.
getPartition(long, long, int) - Method in class org.apache.spark.graphx.PartitionStrategy.RandomVertexCut$

getPartition(Object) - Method in class org.apache.spark.HashPartitioner

getPartition(Object) - Method in class org.apache.spark.Partitioner

getPartition(Object) - Method in class org.apache.spark.RangePartitioner
 
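Partitioner.getPartition maps a key to a partition id in [0, numPartitions). For hash-based partitioning this is typically the key's hash code modulo the partition count, corrected to stay non-negative, since a hash code can be negative. A sketch of that mapping in Python (illustrative only; the real logic lives in org.apache.spark.HashPartitioner):

```python
def non_negative_mod(x, mod):
    """Map an arbitrary (possibly negative) hash code into [0, mod)."""
    raw = x % mod
    # In Python, % already yields a non-negative result for a positive
    # modulus; in Java/Scala it can be negative, hence the correction.
    return raw + mod if raw < 0 else raw

def get_partition(key, num_partitions):
    # None keys conventionally go to partition 0.
    return 0 if key is None else non_negative_mod(hash(key), num_partitions)
```

Keeping the result non-negative matters: a raw `hashCode % n` in Java can return -3, which is not a valid partition index.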
getPartition(String, String, Map<String, String>) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Returns the specified partition, or throws `NoSuchPartitionException`.
getPartitionId() - Static method in class org.apache.spark.TaskContext
Returns the partition id of the currently active TaskContext.
getPartitionNames(CatalogTable, Option<Map<String, String>>) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Returns the partition names for the given table that match the supplied partition spec.
getPartitionOption(String, String, Map<String, String>) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Returns the specified partition, or None if it does not exist.
getPartitionOption(CatalogTable, Map<String, String>) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Returns the specified partition, or None if it does not exist.
getPartitions() - Method in class org.apache.spark.api.r.BaseRRDD

getPartitions() - Method in class org.apache.spark.rdd.CoGroupedRDD

getPartitions() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer

getPartitions() - Method in class org.apache.spark.rdd.HadoopRDD

getPartitions() - Method in class org.apache.spark.rdd.JdbcRDD

getPartitions() - Method in class org.apache.spark.rdd.NewHadoopRDD

getPartitions() - Method in class org.apache.spark.rdd.ShuffledRDD

getPartitions() - Method in class org.apache.spark.rdd.UnionRDD

getPartitions(String, String, Option<Map<String, String>>) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Returns the partitions for the given table that match the supplied partition spec.
getPartitions(CatalogTable, Option<Map<String, String>>) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Returns the partitions for the given table that match the supplied partition spec.
getPartitions() - Method in class org.apache.spark.status.LiveRDD

getPartitionsByFilter(CatalogTable, Seq<Expression>) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Returns partitions filtered by predicates for the given table.
getPartitionTableScan(Expression, LogicalPlan) - Static method in class org.apache.spark.sql.dynamicpruning.PartitionPruning
Search the partitioned table scan for a given partition column in a logical plan.
getPartitionWriter(int) - Method in interface org.apache.spark.shuffle.api.ShuffleMapOutputWriter
Creates a writer that can open an output stream to persist bytes targeted for a given reduce partition id.
getPath() - Method in class org.apache.spark.input.PortableDataStream

getPattern() - Method in class org.apache.spark.ml.feature.RegexTokenizer

GetPeers(BlockManagerId) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetPeers

GetPeers$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetPeers$

getPercentile() - Method in interface org.apache.spark.ml.feature.ChiSqSelectorParams

getPersistentRDDs() - Method in class org.apache.spark.api.java.JavaSparkContext
Returns a Java map of JavaRDDs that have marked themselves as persistent via a cache() call.
getPersistentRDDs() - Method in class org.apache.spark.SparkContext
Returns an immutable map of RDDs that have marked themselves as persistent via a cache() call.
getPmml() - Method in interface org.apache.spark.mllib.pmml.export.PMMLModelExport

getPoissonSamplingFunction(RDD<Tuple2<K, V>>, Map<K, Object>, boolean, long, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
Return the per-partition sampling function used for sampling with replacement.
getPoolForName(String) - Method in class org.apache.spark.SparkContext
:: DeveloperApi :: Return the pool associated with the given name, if one exists.
getPosition() - Method in class org.apache.spark.streaming.kinesis.KinesisInitialPositions.AtTimestamp

getPosition() - Method in class org.apache.spark.streaming.kinesis.KinesisInitialPositions.Latest

getPosition() - Method in class org.apache.spark.streaming.kinesis.KinesisInitialPositions.TrimHorizon

getPowerIterationClustering(int, String, int, String, String, String) - Static method in class org.apache.spark.ml.r.PowerIterationClusteringWrapper

getPredictionCol() - Method in interface org.apache.spark.ml.param.shared.HasPredictionCol

getPreferredLocations(Partition) - Method in class org.apache.spark.rdd.HadoopRDD

getPreferredLocations(Partition) - Method in class org.apache.spark.rdd.NewHadoopRDD

getPreferredLocations(Partition) - Method in class org.apache.spark.rdd.UnionRDD

getPrefixSpan(double, int, double, String) - Static method in class org.apache.spark.ml.r.PrefixSpanWrapper

getPrimitiveNullWritableConstantObjectInspector() - Method in interface org.apache.spark.sql.hive.HiveInspectors

getProbabilityCol() - Method in interface org.apache.spark.ml.param.shared.HasProbabilityCol

getProcessId() - Static method in class org.apache.spark.util.Utils
Returns the pid of this JVM process.
getProcessName() - Static method in class org.apache.spark.util.Utils
Returns the name of this JVM process.
getPropertiesFromFile(String) - Static method in class org.apache.spark.util.Utils
Load properties present in the given file.
getPythonRunnerConfMap(SQLConf) - Static method in class org.apache.spark.sql.util.ArrowUtils
Return a Map with conf settings to be used in ArrowPythonRunner.
getQuantileCalculationStrategy() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

getQuantileProbabilities() - Method in interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams

getQuantilesCol() - Method in interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams

getRandomSample(Seq<T>, int, Random) - Static method in class org.apache.spark.storage.BlockReplicationUtils
Get a random sample of size m from the given elements.
getRank() - 接口 中的方法org.apache.spark.ml.recommendation.ALSParams
 
getRatingCol() - 接口 中的方法org.apache.spark.ml.recommendation.ALSParams
 
getRawPredictionCol() - 接口 中的方法org.apache.spark.ml.param.shared.HasRawPredictionCol
 
getRDDStorageInfo() - 类 中的方法org.apache.spark.SparkContext
:: DeveloperApi :: Return information about what RDDs are cached, if they are in mem or on disk, how much space they take, etc.
getReceiver() - 类 中的方法org.apache.spark.streaming.dstream.ReceiverInputDStream
Gets the receiver object that will be sent to the worker nodes to receive data.
getRegParam() - 接口 中的方法org.apache.spark.ml.param.shared.HasRegParam
 
getRelativeError() - 接口 中的方法org.apache.spark.ml.param.shared.HasRelativeError
 
getRemoteUser() - 类 中的方法org.apache.spark.ui.XssSafeRequest
 
getResource(String) - 类 中的方法org.apache.spark.util.ChildFirstURLClassLoader
 
getResources(String) - 类 中的方法org.apache.spark.util.ChildFirstURLClassLoader
 
getRollingIntervalSecs(SparkConf, boolean) - 类 中的静态方法org.apache.spark.streaming.util.WriteAheadLogUtils
 
getRootDirectory() - 类 中的静态方法org.apache.spark.SparkFiles
Get the root directory that contains files added through SparkContext.addFile().
getRow(int) - 类 中的方法org.apache.spark.sql.vectorized.ColumnarBatch
Returns the row in this batch at `rowId`.
getScalingVec() - 类 中的方法org.apache.spark.ml.feature.ElementwiseProduct
 
getSchedulableByName(String) - 接口 中的方法org.apache.spark.scheduler.Schedulable
 
getSchedulingMode() - 类 中的方法org.apache.spark.SparkContext
Return current scheduling mode
getSchemaQuery(String) - 类 中的方法org.apache.spark.sql.jdbc.AggregatedDialect
 
getSchemaQuery(String) - 类 中的静态方法org.apache.spark.sql.jdbc.DB2Dialect
 
getSchemaQuery(String) - 类 中的静态方法org.apache.spark.sql.jdbc.DerbyDialect
 
getSchemaQuery(String) - 类 中的方法org.apache.spark.sql.jdbc.JdbcDialect
The SQL query that should be used to discover the schema of a table.
getSchemaQuery(String) - 类 中的静态方法org.apache.spark.sql.jdbc.MsSqlServerDialect
 
getSchemaQuery(String) - 类 中的静态方法org.apache.spark.sql.jdbc.MySQLDialect
 
getSchemaQuery(String) - 类 中的静态方法org.apache.spark.sql.jdbc.NoopDialect
 
getSchemaQuery(String) - 类 中的静态方法org.apache.spark.sql.jdbc.OracleDialect
 
getSchemaQuery(String) - 类 中的静态方法org.apache.spark.sql.jdbc.PostgresDialect
 
getSchemaQuery(String) - 类 中的静态方法org.apache.spark.sql.jdbc.TeradataDialect
 
getSeed() - 接口 中的方法org.apache.spark.ml.param.shared.HasSeed
 
getSeed() - 类 中的方法org.apache.spark.mllib.clustering.BisectingKMeans
Gets the random seed.
getSeed() - 类 中的方法org.apache.spark.mllib.clustering.GaussianMixture
Return the random seed
getSeed() - 类 中的方法org.apache.spark.mllib.clustering.KMeans
The random seed for cluster initialization.
getSeed() - 类 中的方法org.apache.spark.mllib.clustering.LDA
Random seed for cluster initialization.
getSeed() - 类 中的方法org.apache.spark.mllib.clustering.LocalLDAModel
Random seed for cluster initialization.
getSelectorType() - 接口 中的方法org.apache.spark.ml.feature.ChiSqSelectorParams
 
getSeq(int) - 接口 中的方法org.apache.spark.sql.Row
Returns the value at position i of array type as a Scala Seq.
getSeqOp(boolean, Map<K, Object>, org.apache.spark.util.random.StratifiedSamplingUtils.RandomDataGenerator, Option<Map<K, Object>>) - 类 中的静态方法org.apache.spark.util.random.StratifiedSamplingUtils
Returns the function used by aggregate to collect sampling statistics for each partition.
getSequenceCol() - 类 中的方法org.apache.spark.ml.fpm.PrefixSpan
 
getSessionConf(SparkSession) - 类 中的静态方法org.apache.spark.sql.api.r.SQLUtils
 
getShort(int) - 接口 中的方法org.apache.spark.sql.Row
Returns the value at position i as a primitive short.
getShort(int) - 类 中的方法org.apache.spark.sql.vectorized.ArrowColumnVector
 
getShort(int) - 类 中的方法org.apache.spark.sql.vectorized.ColumnarArray
 
getShort(int) - 类 中的方法org.apache.spark.sql.vectorized.ColumnarRow
 
getShort(int) - 类 中的方法org.apache.spark.sql.vectorized.ColumnVector
Returns the short type value for rowId.
getShorts(int, int) - 类 中的方法org.apache.spark.sql.vectorized.ColumnVector
Gets short type values from [rowId, rowId + count).
getShortWritable(Object) - 接口 中的方法org.apache.spark.sql.hive.HiveInspectors
 
getShortWritableConstantObjectInspector(Object) - 接口 中的方法org.apache.spark.sql.hive.HiveInspectors
 
getSimpleMessage() - 异常错误 中的方法org.apache.spark.sql.AnalysisException
 
getSimpleName(Class<?>) - 类 中的静态方法org.apache.spark.util.Utils
Safer than Class obj's getSimpleName which may throw Malformed class name error in scala.
getSize() - 类 中的方法org.apache.spark.ml.feature.VectorSizeHint
group getParam
getSizeAsBytes(String) - 类 中的方法org.apache.spark.SparkConf
Get a size parameter as bytes; throws a NoSuchElementException if it's not set.
getSizeAsBytes(String, String) - 类 中的方法org.apache.spark.SparkConf
Get a size parameter as bytes, falling back to a default if not set.
getSizeAsBytes(String, long) - 类 中的方法org.apache.spark.SparkConf
Get a size parameter as bytes, falling back to a default if not set.
getSizeAsGb(String) - 类 中的方法org.apache.spark.SparkConf
Get a size parameter as Gibibytes; throws a NoSuchElementException if it's not set.
getSizeAsGb(String, String) - 类 中的方法org.apache.spark.SparkConf
Get a size parameter as Gibibytes, falling back to a default if not set.
getSizeAsKb(String) - 类 中的方法org.apache.spark.SparkConf
Get a size parameter as Kibibytes; throws a NoSuchElementException if it's not set.
getSizeAsKb(String, String) - 类 中的方法org.apache.spark.SparkConf
Get a size parameter as Kibibytes, falling back to a default if not set.
getSizeAsMb(String) - 类 中的方法org.apache.spark.SparkConf
Get a size parameter as Mebibytes; throws a NoSuchElementException if it's not set.
getSizeAsMb(String, String) - 类 中的方法org.apache.spark.SparkConf
Get a size parameter as Mebibytes, falling back to a default if not set.
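The getSizeAs* entries above all accept the same size-string format, where a numeric value may carry a binary-unit suffix (e.g. "512k", "1g") and is converted to the requested unit. As a rough plain-Python illustration of that suffix convention (a sketch of the documented behavior, not Spark's actual parser; the treatment of bare numbers as bytes is an assumption that matches getSizeAsBytes only):

```python
# Sketch of the binary-size suffix convention used by the SparkConf
# getSizeAs* family (illustrative only; not Spark's implementation).
UNITS = {"b": 1, "k": 1024, "m": 1024**2, "g": 1024**3, "t": 1024**4}

def size_as_bytes(s: str) -> int:
    """Parse a size string like '512k' or '1g' into bytes."""
    s = s.strip().lower()
    if s[-1].isdigit():          # bare number: treated as bytes here
        return int(s)
    return int(s[:-1]) * UNITS[s[-1]]

print(size_as_bytes("1k"))   # 1024
print(size_as_bytes("2m"))   # 2097152
```

The unit-specific variants (getSizeAsKb, getSizeAsMb, getSizeAsGb) then divide the byte count by the corresponding power of 1024.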
getSizeForBlock(int) - Method in interface org.apache.spark.scheduler.MapStatus
Estimated size for the reduce block, in bytes.
getSizeInBytes() - Method in interface org.apache.spark.ml.linalg.Matrix
Gets the current size in bytes of this `Matrix`.
getSlotDescs() - Method in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods

getSmoothing() - Method in interface org.apache.spark.ml.classification.NaiveBayesParams

getSolver() - Method in interface org.apache.spark.ml.param.shared.HasSolver

getSortedTaskSetQueue() - Method in interface org.apache.spark.scheduler.Schedulable

getSparkClassLoader() - Static method in class org.apache.spark.util.Utils
Get the ClassLoader which loaded Spark.
getSparkHome() - Method in class org.apache.spark.api.java.JavaSparkContext
Get Spark's home location from either a value set through the constructor, or the spark.home Java property, or the SPARK_HOME environment variable (in that order of preference).
getSparkOrYarnConfig(SparkConf, String, String) - Static method in class org.apache.spark.util.Utils
Return the value of a config either through the SparkConf or the Hadoop configuration.
getSparseSizeInBytes(boolean) - Method in interface org.apache.spark.ml.linalg.Matrix
Gets the size of the minimal sparse representation of this `Matrix`.
getSplit() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData

getSplits() - Method in class org.apache.spark.ml.feature.Bucketizer

getSplitsArray() - Method in class org.apache.spark.ml.feature.Bucketizer

getSrcCol() - Method in interface org.apache.spark.ml.clustering.PowerIterationClusteringParams

getStageInfo(int) - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
Returns stage information, or null if the stage info could not be found or was garbage collected.
getStageInfo(int) - Method in class org.apache.spark.SparkStatusTracker
Returns stage information, or None if the stage info could not be found or was garbage collected.
getStagePath(String, int, int, String) - Method in class org.apache.spark.ml.Pipeline.SharedReadWrite$
Get the path for saving the given stage.
getStages() - Method in class org.apache.spark.ml.Pipeline

getStagingDir(Path, Configuration, String) - Method in interface org.apache.spark.sql.hive.execution.SaveAsHiveFile

getStandardization() - Method in interface org.apache.spark.ml.param.shared.HasStandardization

getStartOffset() - Static method in class org.apache.spark.rdd.InputFileBlockHolder
Returns the starting offset of the block currently being read, or -1 if it is unknown.
getStartTimeEpoch() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo

getState() - Method in interface org.apache.spark.launcher.SparkAppHandle
Returns the current application state.
getState() - Method in interface org.apache.spark.sql.hive.client.HiveClient
Return the associated Hive SessionState of this HiveClientImpl.
getState() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
:: DeveloperApi :: Return the current state of the context.
getState() - Method in class org.apache.spark.streaming.StreamingContext
:: DeveloperApi :: Return the current state of the context.
getStatement() - Method in class org.apache.spark.ml.feature.SQLTransformer

getStderr(Process, long) - Static method in class org.apache.spark.util.Utils
Return the stderr of a process after waiting for the process to terminate.
getStepSize() - Method in interface org.apache.spark.ml.param.shared.HasStepSize

getStopWords() - Method in class org.apache.spark.ml.feature.StopWordsRemover

getStorageLevel() - Method in interface org.apache.spark.api.java.JavaRDDLike
Get the RDD's current storage level, or StorageLevel.NONE if none is set.
getStorageLevel() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl

getStorageLevel() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl

getStorageLevel() - Method in class org.apache.spark.rdd.RDD
Get the RDD's current storage level, or StorageLevel.NONE if none is set.
GetStorageStatus$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetStorageStatus$

getStrategy() - Method in interface org.apache.spark.ml.feature.ImputerParams

getString(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i as a String object.
getString(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a String.
getStringArray(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a String array.
getStringIndexerOrderType() - Method in interface org.apache.spark.ml.feature.RFormulaBase

getStringOrderType() - Method in interface org.apache.spark.ml.feature.StringIndexerBase

getStringWritable(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getStringWritableConstantObjectInspector(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getStruct(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i of struct type as a Row object.
getStruct(int, int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray

getStruct(int, int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow

getStruct(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Returns the struct type value for rowId.
getSubsamplingRate() - Method in interface org.apache.spark.ml.clustering.LDAParams

getSubsamplingRate() - Method in interface org.apache.spark.ml.tree.TreeEnsembleParams

getSubsamplingRate() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

getSystemProperties() - Static method in class org.apache.spark.util.Utils
Returns the system properties map, which is thread-safe to iterate over.
getTable(String) - Method in class org.apache.spark.sql.catalog.Catalog
Get the table or view with the specified name.
getTable(String, String) - Method in class org.apache.spark.sql.catalog.Catalog
Get the table or view with the specified name in the specified database.
getTable(CaseInsensitiveStringMap) - Method in interface org.apache.spark.sql.connector.catalog.TableProvider
Return a Table instance to do read/write with user-specified options.
getTable(CaseInsensitiveStringMap, StructType) - Method in interface org.apache.spark.sql.connector.catalog.TableProvider
Return a Table instance to do read/write with user-specified schema and options.
getTable(String, String) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Returns the specified table, or throws `NoSuchTableException`.
getTableExistsQuery(String) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect

getTableExistsQuery(String) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect

getTableExistsQuery(String) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect

getTableExistsQuery(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
Get the SQL query that should be used to find if the given table exists.
getTableExistsQuery(String) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect

getTableExistsQuery(String) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect

getTableExistsQuery(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect

getTableExistsQuery(String) - Static method in class org.apache.spark.sql.jdbc.OracleDialect

getTableExistsQuery(String) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect

getTableExistsQuery(String) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect

getTableNames(SparkSession, String) - Static method in class org.apache.spark.sql.api.r.SQLUtils

getTableOption(String, String) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Returns the metadata for the specified table, or None if it doesn't exist.
getTables(SparkSession, String) - Static method in class org.apache.spark.sql.api.r.SQLUtils

getTablesByName(String, Seq<String>) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Returns metadata of existing permanent tables/views for the given names.
getTaskInfos() - Method in class org.apache.spark.BarrierTaskContext
:: Experimental :: Returns BarrierTaskInfo for all tasks in this barrier stage, ordered by partition ID.
getTau0() - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
A (positive) learning parameter that downweights early iterations.
getThreadDump() - Static method in class org.apache.spark.util.Utils
Return a thread dump of all threads' stacktraces.
getThreadDumpForThread(long) - Static method in class org.apache.spark.util.Utils

getThreshold() - Method in class org.apache.spark.ml.classification.LogisticRegression

getThreshold() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel

getThreshold() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
Get the threshold for binary classification.
getThreshold() - Method in interface org.apache.spark.ml.param.shared.HasThreshold

getThreshold() - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
Returns the threshold (if any) used for converting raw prediction scores into 0/1 predictions.
getThreshold() - Method in class org.apache.spark.mllib.classification.SVMModel
Returns the threshold (if any) used for converting raw prediction scores into 0/1 predictions.
getThresholds() - Method in class org.apache.spark.ml.classification.LogisticRegression

getThresholds() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel

getThresholds() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
Get thresholds for binary or multiclass classification.
getThresholds() - Method in interface org.apache.spark.ml.param.shared.HasThresholds

getThroughOrigin() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator

getTimeAsMs(String) - Method in class org.apache.spark.SparkConf
Get a time parameter as milliseconds; throws a NoSuchElementException if it's not set.
getTimeAsMs(String, String) - Method in class org.apache.spark.SparkConf
Get a time parameter as milliseconds, falling back to a default if not set.
getTimeAsSeconds(String) - Method in class org.apache.spark.SparkConf
Get a time parameter as seconds; throws a NoSuchElementException if it's not set.
getTimeAsSeconds(String, String) - Method in class org.apache.spark.SparkConf
Get a time parameter as seconds, falling back to a default if not set.
getTimeMillis() - Method in interface org.apache.spark.util.Clock

getTimer(L) - Method in interface org.apache.spark.util.ListenerBus
Returns a CodaHale metrics Timer for measuring the listener's event processing time.
getTimestamp(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i of date type as java.sql.Timestamp.
getTimestamp() - Method in class org.apache.spark.streaming.kinesis.KinesisInitialPositions.AtTimestamp

getTimestampWritable(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getTimestampWritableConstantObjectInspector(Object) - Method in interface org.apache.spark.sql.hive.HiveInspectors

getTimeZoneOffset() - Static method in class org.apache.spark.ui.UIUtils

GETTING_RESULT_TIME() - Static method in class org.apache.spark.status.TaskIndexNames

GETTING_RESULT_TIME() - Static method in class org.apache.spark.ui.jobs.TaskDetailsClassNames

GETTING_RESULT_TIME() - Static method in class org.apache.spark.ui.ToolTips

gettingResult() - Method in class org.apache.spark.scheduler.TaskInfo

gettingResultTime() - Method in class org.apache.spark.scheduler.TaskInfo
The time when the task started remotely getting the result.
gettingResultTime() - Method in class org.apache.spark.status.api.v1.TaskData

gettingResultTime() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions

gettingResultTime(TaskData) - Static method in class org.apache.spark.status.AppStatusUtils

gettingResultTime(long, long, long) - Static method in class org.apache.spark.status.AppStatusUtils

getTokenJaasParams(KafkaTokenClusterConf) - Static method in class org.apache.spark.kafka010.KafkaTokenUtil

getTol() - Method in interface org.apache.spark.ml.param.shared.HasTol

getToLowercase() - Method in class org.apache.spark.ml.feature.RegexTokenizer

getTopicConcentration() - Method in interface org.apache.spark.ml.clustering.LDAParams

getTopicConcentration() - Method in class org.apache.spark.mllib.clustering.LDA
Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics' distributions over terms.
getTopicDistributionCol() - Method in interface org.apache.spark.ml.clustering.LDAParams

getTopologyForHost(String) - Method in class org.apache.spark.storage.DefaultTopologyMapper

getTopologyForHost(String) - Method in class org.apache.spark.storage.FileBasedTopologyMapper

getTopologyForHost(String) - Method in class org.apache.spark.storage.TopologyMapper
Gets the topology information given the host name.
getTrainRatio() - Method in interface org.apache.spark.ml.tuning.TrainValidationSplitParams

getTreeStrategy() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy

getTruncateQuery(String, Option<Object>) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect
The SQL query used to truncate a table.
getTruncateQuery(String) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect

getTruncateQuery(String, Option<Object>) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect

getTruncateQuery(String) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect

getTruncateQuery(String, Option<Object>) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect

getTruncateQuery(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
The SQL query that should be used to truncate a table.
getTruncateQuery(String, Option<Object>) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
The SQL query that should be used to truncate a table.
getTruncateQuery(String) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect

getTruncateQuery(String, Option<Object>) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect

getTruncateQuery(String) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect

getTruncateQuery(String, Option<Object>) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect

getTruncateQuery(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect

getTruncateQuery(String, Option<Object>) - Static method in class org.apache.spark.sql.jdbc.NoopDialect

getTruncateQuery(String, Option<Object>) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
The SQL query used to truncate a table.
getTruncateQuery(String, Option<Object>) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
The SQL query used to truncate a table.
getTruncateQuery(String, Option<Object>) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
The SQL query used to truncate a table.
getTruncateQuery$default$2() - Static method in class org.apache.spark.sql.jdbc.DB2Dialect

getTruncateQuery$default$2() - Static method in class org.apache.spark.sql.jdbc.DerbyDialect

getTruncateQuery$default$2() - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect

getTruncateQuery$default$2() - Static method in class org.apache.spark.sql.jdbc.MySQLDialect

getTruncateQuery$default$2() - Static method in class org.apache.spark.sql.jdbc.NoopDialect

getTruncateQuery$default$2() - Static method in class org.apache.spark.sql.jdbc.OracleDialect

getTruncateQuery$default$2() - Static method in class org.apache.spark.sql.jdbc.PostgresDialect

getTruncateQuery$default$2() - Static method in class org.apache.spark.sql.jdbc.TeradataDialect

getUDTFor(String) - Static method in class org.apache.spark.sql.types.UDTRegistration
Returns the Class of UserDefinedType for the name of a given user class.
getUidMap(Params) - Static method in class org.apache.spark.ml.util.MetaAlgorithmReadWrite
Examine the given estimator (which may be a compound estimator) and extract a mapping from UIDs to corresponding Params instances.
getUiRoot(ServletContext) - Static method in class org.apache.spark.status.api.v1.UIRootFromServletContext

getUpper() - Method in interface org.apache.spark.ml.feature.RobustScalerParams

getUpperBound(double, long, double) - Static method in class org.apache.spark.util.random.BinomialBounds
Returns a threshold p such that if we conduct n Bernoulli trials with success rate = p, it is very unlikely to have less than fraction * n successes.
getUpperBound(double) - Static method in class org.apache.spark.util.random.PoissonBounds
Returns a lambda such that Pr[X < s] is very small, where X ~ Pois(lambda).
getUpperBoundsOnCoefficients() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams

getUpperBoundsOnIntercepts() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams

getUsedTimeNs(long) - Static method in class org.apache.spark.util.Utils
Return a string describing how much time has passed, in milliseconds.
getUseNodeIdCache() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

getUserCol() - Method in interface org.apache.spark.ml.recommendation.ALSModelParams

getUserJars(SparkConf) - Static method in class org.apache.spark.util.Utils
Return the jar files pointed to by the "spark.jars" property.
getUTF8String(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector

getUTF8String(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray

getUTF8String(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow

getUTF8String(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Returns the string type value for rowId.
getValidationIndicatorCol() - Method in interface org.apache.spark.ml.param.shared.HasValidationIndicatorCol

getValidationTol() - Method in interface org.apache.spark.ml.tree.GBTParams

getValidationTol() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy

getValue(int) - Method in class org.apache.spark.ml.attribute.NominalAttribute
Gets a value given its index.
getValue() - Method in class org.apache.spark.mllib.stat.test.BinarySample

getValuesMap(Seq<String>) - Method in interface org.apache.spark.sql.Row
Returns a Map of names to values for the requested fieldNames. For primitive types, if the value is null this returns the 'zero value' for that primitive (i.e. 0 for Int); use isNullAt to check whether the value is null.
getVarianceCol() - Method in interface org.apache.spark.ml.param.shared.HasVarianceCol

getVariancePower() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase

getVectors() - Method in class org.apache.spark.ml.feature.Word2VecModel

getVectors() - Method in class org.apache.spark.mllib.feature.Word2VecModel
Returns a map of words to their vector representations.
getVectorSize() - Method in interface org.apache.spark.ml.feature.Word2VecBase

getVocabSize() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams

getWeightCol() - Method in interface org.apache.spark.ml.param.shared.HasWeightCol

getWidth(Row) - Static method in class org.apache.spark.ml.image.ImageSchema
Gets the width of the image.
getWindowSize() - Method in interface org.apache.spark.ml.feature.Word2VecBase

getWithCentering() - Method in interface org.apache.spark.ml.feature.RobustScalerParams

getWithMean() - Method in interface org.apache.spark.ml.feature.StandardScalerParams

getWithScaling() - Method in interface org.apache.spark.ml.feature.RobustScalerParams

getWithStd() - Method in interface org.apache.spark.ml.feature.StandardScalerParams

getWritingCommand(SessionCatalog, CatalogTable, boolean) - Method in interface org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectBase

getWritingCommand(SessionCatalog, CatalogTable, boolean) - Method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand

getWritingCommand(SessionCatalog, CatalogTable, boolean) - Method in class org.apache.spark.sql.hive.execution.OptimizedCreateHiveTableAsSelectCommand

Gini - Class in org.apache.spark.mllib.tree.impurity
Class for calculating the Gini impurity (http://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity) during multiclass classification.
Gini() - Constructor for class org.apache.spark.mllib.tree.impurity.Gini

GLMClassificationModel - Class in org.apache.spark.mllib.classification.impl
Helper class for import/export of GLM classification models.
GLMClassificationModel() - Constructor for class org.apache.spark.mllib.classification.impl.GLMClassificationModel

GLMClassificationModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.classification.impl

GLMClassificationModel.SaveLoadV1_0$.Data - Class in org.apache.spark.mllib.classification.impl
Model data for import/export.
GLMClassificationModel.SaveLoadV1_0$.Data$ - Class in org.apache.spark.mllib.classification.impl

GLMRegressionModel - Class in org.apache.spark.mllib.regression.impl
Helper methods for import/export of GLM regression models.
GLMRegressionModel() - Constructor for class org.apache.spark.mllib.regression.impl.GLMRegressionModel

GLMRegressionModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.regression.impl

GLMRegressionModel.SaveLoadV1_0$.Data - Class in org.apache.spark.mllib.regression.impl
Model data for model import/export.
GLMRegressionModel.SaveLoadV1_0$.Data$ - Class in org.apache.spark.mllib.regression.impl

glom() - Method in interface org.apache.spark.api.java.JavaRDDLike
Return an RDD created by coalescing all elements within each partition into an array.
glom() - Method in class org.apache.spark.rdd.RDD
Return an RDD created by coalescing all elements within each partition into an array.
glom() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD is generated by applying glom() to each RDD of this DStream.
glom() - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD is generated by applying glom() to each RDD of this DStream.
goButtonFormPath() - Method in interface org.apache.spark.ui.PagedTable
Returns the submission path for the "go to page #" form.
goodnessOfFit() - Method in class org.apache.spark.mllib.stat.test.ChiSqTest.NullHypothesis$

GPU() - Static method in class org.apache.spark.resource.ResourceUtils

grad(DenseMatrix<Object>, DenseMatrix<Object>, DenseVector<Object>) - Method in interface org.apache.spark.ml.ann.LayerModel
Computes the gradient.
grad() - Method in class org.apache.spark.mllib.optimization.NNLS.Workspace

gradient() - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
The current weighted averaged gradient.
gradient() - Method in class org.apache.spark.ml.regression.AFTAggregator

Gradient - Class in org.apache.spark.mllib.optimization
:: DeveloperApi :: Class used to compute the gradient for a loss function, given a single data point.
Gradient() - Constructor for class org.apache.spark.mllib.optimization.Gradient

gradient(double, double) - Static method in class org.apache.spark.mllib.tree.loss.AbsoluteError
Method to calculate the gradients for the gradient boosting calculation for least absolute error.
gradient(double, double) - Static method in class org.apache.spark.mllib.tree.loss.LogLoss
Method to calculate the loss gradients for the gradient boosting calculation for binary classification. The gradient with respect to F(x) is: -4y / (1 + exp(2yF(x))).
gradient(double, double) - Method in interface org.apache.spark.mllib.tree.loss.Loss
Method to calculate the gradients for the gradient boosting calculation.
gradient(double, double) - Static method in class org.apache.spark.mllib.tree.loss.SquaredError
Method to calculate the gradients for the gradient boosting calculation for least squares error.
GradientBoostedTrees - Class in org.apache.spark.ml.tree.impl

GradientBoostedTrees() - Constructor for class org.apache.spark.ml.tree.impl.GradientBoostedTrees

GradientBoostedTrees - Class in org.apache.spark.mllib.tree
A class that implements Stochastic Gradient Boosting for regression and binary classification.
GradientBoostedTrees(BoostingStrategy) - Constructor for class org.apache.spark.mllib.tree.GradientBoostedTrees

GradientBoostedTreesModel - Class in org.apache.spark.mllib.tree.model
Represents a gradient boosted trees model.
GradientBoostedTreesModel(Enumeration.Value, DecisionTreeModel[], double[]) - Constructor for class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel

GradientDescent - Class in org.apache.spark.mllib.optimization
Class used to solve an optimization problem using Gradient Descent.
gradientSumArray() - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
Array of gradient values that are mutated when new instances are added to the aggregator.
Graph<VD,ED> - Class in org.apache.spark.graphx
The Graph abstractly represents a graph with arbitrary objects associated with vertices and edges.
GraphGenerators - Class in org.apache.spark.graphx.util
A collection of graph generating functions.
GraphGenerators() - Constructor for class org.apache.spark.graphx.util.GraphGenerators

GraphImpl<VD,ED> - Class in org.apache.spark.graphx.impl
An implementation of Graph to support computation on graphs.
GraphLoader - Class in org.apache.spark.graphx
Provides utilities for loading Graphs from files.
GraphLoader() - Constructor for class org.apache.spark.graphx.GraphLoader

GraphOps<VD,ED> - Class in org.apache.spark.graphx
Contains additional functionality for Graph.
GraphOps(Graph<VD, ED>, ClassTag<VD>, ClassTag<ED>) - Constructor for class org.apache.spark.graphx.GraphOps

graphToGraphOps(Graph<VD, ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.Graph
Implicitly extracts the GraphOps member from a graph.
GraphXUtils - Class in org.apache.spark.graphx

GraphXUtils() - Constructor for class org.apache.spark.graphx.GraphXUtils

greater(Duration) - Method in class org.apache.spark.streaming.Duration

greater(Time) - Method in class org.apache.spark.streaming.Time

greaterEq(Duration) - Method in class org.apache.spark.streaming.Duration

greaterEq(Time) - Method in class org.apache.spark.streaming.Time

GreaterThan - Class in org.apache.spark.sql.sources
A filter that evaluates to true iff the attribute evaluates to a value greater than value.
GreaterThan(String, Object) - Constructor for class org.apache.spark.sql.sources.GreaterThan

GreaterThanOrEqual - Class in org.apache.spark.sql.sources
A filter that evaluates to true iff the attribute evaluates to a value greater than or equal to value.
GreaterThanOrEqual(String, Object) - Constructor for class org.apache.spark.sql.sources.GreaterThanOrEqual

greatest(Column...) - Static method in class org.apache.spark.sql.functions
Returns the greatest value of the list of values, skipping null values.
greatest(String, String...) - Static method in class org.apache.spark.sql.functions
Returns the greatest value of the list of column names, skipping null values.
greatest(Seq<Column>) - Static method in class org.apache.spark.sql.functions
Returns the greatest value of the list of values, skipping null values.
greatest(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
Returns the greatest value of the list of column names, skipping null values.
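The distinguishing behavior of the greatest entries above is that null values are skipped rather than propagated. A minimal plain-Python sketch of those semantics, using None to stand in for SQL NULL (illustration only, not Spark code):

```python
def greatest(*values):
    """Largest non-None value; None only when every input is None,
    mirroring the null-skipping behavior described above."""
    non_null = [v for v in values if v is not None]
    return max(non_null) if non_null else None

print(greatest(1, None, 3))   # 3
print(greatest(None, None))   # None
```

This is what makes greatest different from an ordinary max over nullable columns, where a single NULL would typically make the comparison undefined.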
gridGraph(SparkContext, int, int) - Static method in class org.apache.spark.graphx.util.GraphGenerators
Create a rows-by-cols grid graph with each vertex connected to its row+1 and col+1 neighbors.
groupArr() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer

groupBy(Function<T, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return an RDD of grouped elements.
groupBy(Function<T, U>, int) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return an RDD of grouped elements.
groupBy(Function1<T, K>, ClassTag<K>) - Method in class org.apache.spark.rdd.RDD
Return an RDD of grouped items.
groupBy(Function1<T, K>, int, ClassTag<K>) - Method in class org.apache.spark.rdd.RDD
Return an RDD of grouped elements.
groupBy(Function1<T, K>, Partitioner, ClassTag<K>, Ordering<K>) - Method in class org.apache.spark.rdd.RDD
Return an RDD of grouped items.
groupBy(Column...) - Method in class org.apache.spark.sql.Dataset
Groups the Dataset using the specified columns, so we can run aggregation on them.
groupBy(String, String...) - Method in class org.apache.spark.sql.Dataset
Groups the Dataset using the specified columns, so that we can run aggregation on them.
groupBy(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
Groups the Dataset using the specified columns, so we can run aggregation on them.
groupBy(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
Groups the Dataset using the specified columns, so that we can run aggregation on them.
groupByKey(Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
Group the values for each key in the RDD into a single sequence.
groupByKey(int) - Method in class org.apache.spark.api.java.JavaPairRDD
Group the values for each key in the RDD into a single sequence.
groupByKey() - Method in class org.apache.spark.api.java.JavaPairRDD
Group the values for each key in the RDD into a single sequence.
groupByKey(Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
Group the values for each key in the RDD into a single sequence.
groupByKey(int) - Method in class org.apache.spark.rdd.PairRDDFunctions
Group the values for each key in the RDD into a single sequence.
groupByKey() - Method in class org.apache.spark.rdd.PairRDDFunctions
Group the values for each key in the RDD into a single sequence.
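The groupByKey contract, grouping the values for each key into a single sequence, can be sketched locally (a plain-Python illustration with no partitioner or shuffle, not Spark's implementation):

```python
from collections import defaultdict

def group_by_key(pairs):
    # Group the values for each key into one sequence -- a local sketch of
    # the PairRDDFunctions.groupByKey contract; value order here simply
    # follows input order.
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return dict(groups)

print(group_by_key([("a", 1), ("b", 2), ("a", 3)]))  # {'a': [1, 3], 'b': [2]}
```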
groupByKey(Function1<T, K>, Encoder<K>) - Method in class org.apache.spark.sql.Dataset
(Scala-specific) Returns a KeyValueGroupedDataset where the data is grouped by the given key func.
groupByKey(MapFunction<T, K>, Encoder<K>) - Method in class org.apache.spark.sql.Dataset
(Java-specific) Returns a KeyValueGroupedDataset where the data is grouped by the given key func.
groupByKey() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying groupByKey to each RDD.
groupByKey(int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying groupByKey to each RDD.
groupByKey(Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying groupByKey on each RDD of this DStream.
groupByKey() - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying groupByKey to each RDD.
groupByKey(int) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying groupByKey to each RDD.
groupByKey(Partitioner) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying groupByKey on each RDD.
groupByKeyAndWindow(Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying groupByKey over a sliding window.
groupByKeyAndWindow(Duration, Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying groupByKey over a sliding window.
groupByKeyAndWindow(Duration, Duration, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying groupByKey over a sliding window on this DStream.
groupByKeyAndWindow(Duration, Duration, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying groupByKey over a sliding window on this DStream.
groupByKeyAndWindow(Duration) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying groupByKey over a sliding window.
groupByKeyAndWindow(Duration, Duration) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying groupByKey over a sliding window.
groupByKeyAndWindow(Duration, Duration, int) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying groupByKey over a sliding window on this DStream.
groupByKeyAndWindow(Duration, Duration, Partitioner) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Create a new DStream by applying groupByKey over a sliding window on this DStream.
GroupByType$() - Constructor for class org.apache.spark.sql.RelationalGroupedDataset.GroupByType$

groupEdges(Function2<ED, ED, ED>) - Method in class org.apache.spark.graphx.Graph
Merges multiple edges between two vertices into a single edge.
groupEdges(Function2<ED, ED, ED>) - Method in class org.apache.spark.graphx.impl.GraphImpl

groupHash() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer

grouping(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: indicates whether a specified column in a GROUP BY list is aggregated or not; returns 1 for aggregated or 0 for not aggregated in the result set.
grouping(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: indicates whether a specified column in a GROUP BY list is aggregated or not; returns 1 for aggregated or 0 for not aggregated in the result set.
grouping_id(Seq<Column>) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the level of grouping, equal to (grouping(c1) << (n-1)) + (grouping(c2) << (n-2)) + ... + grouping(cn).
grouping_id(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the level of grouping, equal to (grouping(c1) << (n-1)) + (grouping(c2) << (n-2)) + ... + grouping(cn).
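The grouping_id formula can be checked with a small worked example (plain Python; `grouping_flags` is a hypothetical stand-in for the per-column grouping() bits, 1 when a column is aggregated away in the current grouping set):

```python
def grouping_id(grouping_flags):
    # grouping_id = (grouping(c1) << (n-1)) + ... + grouping(cn),
    # with the first column in the highest bit -- a worked sketch
    # of the formula, not Spark code.
    n = len(grouping_flags)
    return sum(g << (n - 1 - i) for i, g in enumerate(grouping_flags))

# e.g. for GROUP BY CUBE(c1, c2): the grand-total row aggregates both columns.
print(grouping_id([1, 1]))  # 3
print(grouping_id([0, 1]))  # 1
```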
GroupMappingServiceProvider - Interface in org.apache.spark.security
This Spark trait is used for mapping a given userName to the set of groups it belongs to.
GroupState<S> - Interface in org.apache.spark.sql.streaming
:: Experimental :: Wrapper class for interacting with per-group state data in mapGroupsWithState and flatMapGroupsWithState operations on KeyValueGroupedDataset.
GroupStateTimeout - Class in org.apache.spark.sql.streaming
Represents the type of timeouts possible for the Dataset operations `mapGroupsWithState` and `flatMapGroupsWithState`.
GroupStateTimeout() - Constructor for class org.apache.spark.sql.streaming.GroupStateTimeout

groupWith(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
Alias for cogroup.
groupWith(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>) - Method in class org.apache.spark.api.java.JavaPairRDD
Alias for cogroup.
groupWith(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, JavaPairRDD<K, W3>) - Method in class org.apache.spark.api.java.JavaPairRDD
Alias for cogroup.
groupWith(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Alias for cogroup.
groupWith(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Alias for cogroup.
groupWith(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, RDD<Tuple2<K, W3>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Alias for cogroup.
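Since groupWith is an alias for cogroup, its semantics can be sketched locally (plain Python over two inputs; not Spark's partitioned implementation):

```python
from collections import defaultdict

def cogroup(left, right):
    # For each key present in either input, pair the list of its values
    # from the left with the list from the right -- a local sketch of the
    # two-RDD cogroup that groupWith aliases.
    l, r = defaultdict(list), defaultdict(list)
    for k, v in left:
        l[k].append(v)
    for k, v in right:
        r[k].append(v)
    return {k: (l[k], r[k]) for k in sorted(set(l) | set(r))}

print(cogroup([("a", 1), ("b", 2)], [("a", 9)]))
# {'a': ([1], [9]), 'b': ([2], [])}
```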
gt(double) - Static method in class org.apache.spark.ml.param.ParamValidators
Check if value is greater than lowerBound.
gt(Object) - Method in class org.apache.spark.sql.Column
Greater than.
gt(T, T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric

gt(T, T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric

gt(double, double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric

gt(float, float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric

gt(T, T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric

gt(T, T) - Static method in class org.apache.spark.sql.types.LongExactNumeric

gt(T, T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric

gtEq(double) - Static method in class org.apache.spark.ml.param.ParamValidators
Check if value is greater than or equal to lowerBound.
gteq(T, T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric

gteq(T, T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric

gteq(double, double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric

gteq(float, float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric

gteq(T, T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric

gteq(T, T) - Static method in class org.apache.spark.sql.types.LongExactNumeric

gteq(T, T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric

guard(Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser


H

hadoopConfiguration() - Method in class org.apache.spark.api.java.JavaSparkContext
Returns the Hadoop configuration used for the Hadoop code (e.g. file systems) we reuse.
hadoopConfiguration() - Method in class org.apache.spark.SparkContext
A default Hadoop Configuration for the Hadoop code (e.g. file systems) that we reuse.
hadoopDelegationCreds() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig

HadoopDelegationTokenProvider - Interface in org.apache.spark.security
::DeveloperApi:: Hadoop delegation token provider.
hadoopFile(String, Class<F>, Class<K>, Class<V>, int) - Method in class org.apache.spark.api.java.JavaSparkContext
Get an RDD for a Hadoop file with an arbitrary InputFormat.
hadoopFile(String, Class<F>, Class<K>, Class<V>) - Method in class org.apache.spark.api.java.JavaSparkContext
Get an RDD for a Hadoop file with an arbitrary InputFormat.
hadoopFile(String, Class<? extends InputFormat<K, V>>, Class<K>, Class<V>, int) - Method in class org.apache.spark.SparkContext
Get an RDD for a Hadoop file with an arbitrary InputFormat.
hadoopFile(String, int, ClassTag<K>, ClassTag<V>, ClassTag<F>) - Method in class org.apache.spark.SparkContext
Smarter version of hadoopFile() that uses class tags to figure out the classes of keys, values and the InputFormat so that users don't need to pass them directly.
hadoopFile(String, ClassTag<K>, ClassTag<V>, ClassTag<F>) - Method in class org.apache.spark.SparkContext
Smarter version of hadoopFile() that uses class tags to figure out the classes of keys, values and the InputFormat so that users don't need to pass them directly.
HadoopMapPartitionsWithSplitRDD$() - Constructor for class org.apache.spark.rdd.HadoopRDD.HadoopMapPartitionsWithSplitRDD$

HadoopMapRedCommitProtocol - Class in org.apache.spark.internal.io
A FileCommitProtocol implementation backed by an underlying Hadoop OutputCommitter (from the old mapred API).
HadoopMapRedCommitProtocol(String, String) - Constructor for class org.apache.spark.internal.io.HadoopMapRedCommitProtocol

HadoopMapReduceCommitProtocol - Class in org.apache.spark.internal.io
A FileCommitProtocol implementation backed by an underlying Hadoop OutputCommitter (from the newer mapreduce API, not the old mapred API).
HadoopMapReduceCommitProtocol(String, String, boolean) - Constructor for class org.apache.spark.internal.io.HadoopMapReduceCommitProtocol

hadoopProperties() - Method in class org.apache.spark.status.api.v1.ApplicationEnvironmentInfo

hadoopRDD(JobConf, Class<F>, Class<K>, Class<V>, int) - Method in class org.apache.spark.api.java.JavaSparkContext
Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf giving its InputFormat and any other necessary info (e.g. file name for a filesystem-based dataset, table name for HyperTable, etc).
hadoopRDD(JobConf, Class<F>, Class<K>, Class<V>) - Method in class org.apache.spark.api.java.JavaSparkContext
Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf giving its InputFormat and any other necessary info (e.g. file name for a filesystem-based dataset, table name for HyperTable, etc).
HadoopRDD<K,V> - Class in org.apache.spark.rdd
:: DeveloperApi :: An RDD that provides core functionality for reading data stored in Hadoop (e.g., files in HDFS, sources in HBase, or S3), using the older MapReduce API (org.apache.hadoop.mapred).
HadoopRDD(SparkContext, Broadcast<SerializableConfiguration>, Option<Function1<JobConf, BoxedUnit>>, Class<? extends InputFormat<K, V>>, Class<K>, Class<V>, int) - Constructor for class org.apache.spark.rdd.HadoopRDD

HadoopRDD(SparkContext, JobConf, Class<? extends InputFormat<K, V>>, Class<K>, Class<V>, int) - Constructor for class org.apache.spark.rdd.HadoopRDD

hadoopRDD(JobConf, Class<? extends InputFormat<K, V>>, Class<K>, Class<V>, int) - Method in class org.apache.spark.SparkContext
Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf given its InputFormat and other necessary info (e.g. file name for a filesystem-based dataset, table name for HyperTable), using the older MapReduce API (org.apache.hadoop.mapred).
HadoopRDD.HadoopMapPartitionsWithSplitRDD$ - Class in org.apache.spark.rdd

HadoopWriteConfigUtil<K,V> - Class in org.apache.spark.internal.io
Interface for creating the output format/committer/writer used when saving an RDD using a Hadoop OutputFormat (both from the old mapred API and the new mapreduce API). Notes: 1.
HadoopWriteConfigUtil(ClassTag<V>) - Constructor for class org.apache.spark.internal.io.HadoopWriteConfigUtil

hammingLoss() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
Returns the Hamming loss.
handleInvalid() - Method in class org.apache.spark.ml.feature.Bucketizer
Param for how to handle invalid entries containing NaN values.
handleInvalid() - Method in class org.apache.spark.ml.feature.OneHotEncoder

handleInvalid() - Method in interface org.apache.spark.ml.feature.OneHotEncoderBase
Param for how to handle invalid data during transform().
handleInvalid() - Method in class org.apache.spark.ml.feature.OneHotEncoderModel

handleInvalid() - Method in class org.apache.spark.ml.feature.QuantileDiscretizer

handleInvalid() - Method in interface org.apache.spark.ml.feature.QuantileDiscretizerBase
Param for how to handle invalid entries.
handleInvalid() - Method in class org.apache.spark.ml.feature.RFormula

handleInvalid() - Method in interface org.apache.spark.ml.feature.RFormulaBase
Param for how to handle invalid data (unseen or NULL values) in features and label column of string type.
handleInvalid() - Method in class org.apache.spark.ml.feature.RFormulaModel

handleInvalid() - Method in class org.apache.spark.ml.feature.StringIndexer

handleInvalid() - Method in interface org.apache.spark.ml.feature.StringIndexerBase
Param for how to handle invalid data (unseen labels or NULL values).
handleInvalid() - Method in class org.apache.spark.ml.feature.StringIndexerModel

handleInvalid() - Method in class org.apache.spark.ml.feature.VectorAssembler
Param for how to handle invalid data (NULL values).
handleInvalid() - Method in class org.apache.spark.ml.feature.VectorIndexer

handleInvalid() - Method in class org.apache.spark.ml.feature.VectorIndexerModel

handleInvalid() - Method in interface org.apache.spark.ml.feature.VectorIndexerParams
Param for how to handle invalid data (unseen labels or NULL values).
handleInvalid() - Method in class org.apache.spark.ml.feature.VectorSizeHint
Param for how to handle invalid entries.
handleInvalid() - Method in interface org.apache.spark.ml.param.shared.HasHandleInvalid
Param for how to handle invalid entries.
hasAccumulators(StageData) - Static method in class org.apache.spark.ui.jobs.ApiHelper

HasAggregationDepth - Interface in org.apache.spark.ml.param.shared
Trait for shared param aggregationDepth (default: 2).
hasAttr(String) - Method in class org.apache.spark.ml.attribute.AttributeGroup
Test whether this attribute group contains a specific attribute.
hasBytesSpilled(StageData) - Static method in class org.apache.spark.ui.jobs.ApiHelper

hasCachedSerializedBroadcast() - Method in class org.apache.spark.ShuffleStatus

HasCheckpointInterval - Interface in org.apache.spark.ml.param.shared
Trait for shared param checkpointInterval.
HasCollectSubModels - Interface in org.apache.spark.ml.param.shared
Trait for shared param collectSubModels (default: false).
hasDefault(Param<T>) - Method in interface org.apache.spark.ml.param.Params
Tests whether the input param has a default value set.
HasDistanceMeasure - Interface in org.apache.spark.ml.param.shared
Trait for shared param distanceMeasure (default: "euclidean").
HasElasticNetParam - Interface in org.apache.spark.ml.param.shared
Trait for shared param elasticNetParam.
HasFeaturesCol - Interface in org.apache.spark.ml.param.shared
Trait for shared param featuresCol (default: "features").
HasFitIntercept - Interface in org.apache.spark.ml.param.shared
Trait for shared param fitIntercept (default: true).
hash(Column...) - Static method in class org.apache.spark.sql.functions
Calculates the hash code of given columns, and returns the result as an int column.
hash(Seq<Column>) - Static method in class org.apache.spark.sql.functions
Calculates the hash code of given columns, and returns the result as an int column.
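How a row's columns fold into a single int can be illustrated with a toy combiner (illustrative only: Spark's functions.hash uses a Murmur3-based hash, not the polynomial formula below):

```python
def combined_hash(*cols, seed=42):
    # Fold several column values into one 32-bit integer -- a generic
    # hash-combining sketch, NOT the actual algorithm behind
    # org.apache.spark.sql.functions.hash.
    h = seed
    for c in cols:
        h = (h * 31 + hash(c)) & 0xFFFFFFFF
    return h

print(combined_hash(1, 2))  # 40395
```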
HasHandleInvalid - Interface in org.apache.spark.ml.param.shared
Trait for shared param handleInvalid.
hashCode() - Method in class org.apache.spark.api.java.Optional

hashCode() - Method in class org.apache.spark.graphx.EdgeDirection

hashCode() - Method in class org.apache.spark.HashPartitioner

hashCode() - Method in class org.apache.spark.ml.attribute.AttributeGroup

hashCode() - Method in class org.apache.spark.ml.attribute.BinaryAttribute

hashCode() - Method in class org.apache.spark.ml.attribute.NominalAttribute

hashCode() - Method in class org.apache.spark.ml.attribute.NumericAttribute

hashCode() - Method in class org.apache.spark.ml.linalg.DenseMatrix

hashCode() - Method in class org.apache.spark.ml.linalg.DenseVector

hashCode() - Method in class org.apache.spark.ml.linalg.SparseMatrix

hashCode() - Method in class org.apache.spark.ml.linalg.SparseVector

hashCode() - Method in interface org.apache.spark.ml.linalg.Vector
Returns a hash code value for the vector.
hashCode() - Method in class org.apache.spark.ml.param.Param

hashCode() - Method in class org.apache.spark.ml.tree.CategoricalSplit

hashCode() - Method in class org.apache.spark.ml.tree.ContinuousSplit

hashCode() - Method in class org.apache.spark.mllib.linalg.DenseMatrix

hashCode() - Method in class org.apache.spark.mllib.linalg.DenseVector

hashCode() - Method in class org.apache.spark.mllib.linalg.SparseMatrix

hashCode() - Method in class org.apache.spark.mllib.linalg.SparseVector

hashCode() - Method in interface org.apache.spark.mllib.linalg.Vector
Returns a hash code value for the vector.
hashCode() - Method in class org.apache.spark.mllib.linalg.VectorUDT

hashCode() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats

hashCode() - Method in class org.apache.spark.mllib.tree.model.Predict

hashCode() - Method in class org.apache.spark.partial.BoundedDouble

hashCode() - Method in interface org.apache.spark.Partition

hashCode() - Method in class org.apache.spark.RangePartitioner

hashCode() - Method in class org.apache.spark.resource.ResourceInformation

hashCode() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo

hashCode() - Method in class org.apache.spark.scheduler.InputFormatInfo

hashCode() - Method in class org.apache.spark.scheduler.SplitInfo

hashCode() - Method in class org.apache.spark.sql.Column

hashCode() - Method in class org.apache.spark.sql.connector.catalog.TableChange.AddColumn

hashCode() - Method in class org.apache.spark.sql.connector.catalog.TableChange.DeleteColumn

hashCode() - Method in class org.apache.spark.sql.connector.catalog.TableChange.RemoveProperty

hashCode() - Method in class org.apache.spark.sql.connector.catalog.TableChange.RenameColumn

hashCode() - Method in class org.apache.spark.sql.connector.catalog.TableChange.SetProperty

hashCode() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnComment

hashCode() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnType

hashCode() - Method in class org.apache.spark.sql.connector.read.streaming.Offset

hashCode() - Method in interface org.apache.spark.sql.Row

hashCode() - Static method in class org.apache.spark.sql.sources.AlwaysFalse

hashCode() - Static method in class org.apache.spark.sql.sources.AlwaysTrue

hashCode() - Method in class org.apache.spark.sql.sources.In

hashCode() - Method in class org.apache.spark.sql.types.Decimal

hashCode() - Method in class org.apache.spark.sql.types.Metadata

hashCode() - Method in class org.apache.spark.sql.types.StructType

hashCode() - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap

hashCode() - Method in class org.apache.spark.storage.BlockManagerId

hashCode() - Method in class org.apache.spark.storage.StorageLevel

HashingTF - Class in org.apache.spark.ml.feature
Maps a sequence of terms to their term frequencies using the hashing trick.
HashingTF(String) - Constructor for class org.apache.spark.ml.feature.HashingTF

HashingTF() - Constructor for class org.apache.spark.ml.feature.HashingTF

HashingTF - Class in org.apache.spark.mllib.feature
Maps a sequence of terms to their term frequencies using the hashing trick.
HashingTF(int) - Constructor for class org.apache.spark.mllib.feature.HashingTF

HashingTF() - Constructor for class org.apache.spark.mllib.feature.HashingTF
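The hashing trick behind HashingTF, mapping each term to a fixed-size vector slot by hashing, can be sketched in plain Python (illustrative only; Spark's hash function and default numFeatures differ):

```python
from collections import Counter

def hashing_tf(terms, num_features=16):
    # Map a sequence of terms to term-frequency counts via the hashing
    # trick: index = hash(term) mod num_features.  Distinct terms may
    # collide into the same slot -- that is the trick's trade-off.
    vec = [0] * num_features
    for term, count in Counter(terms).items():
        vec[hash(term) % num_features] += count
    return vec
```

Note that Python randomizes string hashing per process, so the occupied indices vary run to run; only the total count and vector length are stable here.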
 
HashPartitioner - Class in org.apache.spark
A Partitioner that implements hash-based partitioning using Java's Object.hashCode.
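The partitioning rule can be sketched as follows (plain Python, with `hash()` standing in for Java's Object.hashCode; Spark additionally forces the modulus result non-negative, which Python's `%` already guarantees for a positive divisor):

```python
def hash_partition(key, num_partitions):
    # Hash-based partitioning in the style of HashPartitioner:
    # partition = hashCode(key) mod numPartitions, non-negative.
    return hash(key) % num_partitions

print(hash_partition(10, 4))  # 2
```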
HashPartitioner(int) - Constructor for class org.apache.spark.HashPartitioner

hasInput(StageData) - Static method in class org.apache.spark.ui.jobs.ApiHelper

HasInputCol - Interface in org.apache.spark.ml.param.shared
Trait for shared param inputCol.
HasInputCols - Interface in org.apache.spark.ml.param.shared
Trait for shared param inputCols.
hasInputOutputFormat() - Method in class org.apache.spark.sql.hive.execution.HiveOptions

hasLabelCol(StructType) - Method in interface org.apache.spark.ml.feature.RFormulaBase

HasLabelCol - Interface in org.apache.spark.ml.param.shared
Trait for shared param labelCol (default: "label").
hasLinkPredictionCol() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
Checks whether we should output link prediction.
HasLoss - Interface in org.apache.spark.ml.param.shared
Trait for shared param loss.
HasMaxIter - Interface in org.apache.spark.ml.param.shared
Trait for shared param maxIter.
hasMemoryInfo() - Method in class org.apache.spark.status.LiveExecutor

hasNext() - Method in class org.apache.spark.InterruptibleIterator

hasNull() - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector

hasNull() - Method in class org.apache.spark.sql.vectorized.ColumnVector
Returns true if this column vector contains any null values.
HasNumFeatures - Interface in org.apache.spark.ml.param.shared
Trait for shared param numFeatures (default: 262144).
hasOffsetCol() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
Checks whether offset column is set and nonempty.
hasOutput(StageData) - Static method in class org.apache.spark.ui.jobs.ApiHelper

HasOutputCol - Interface in org.apache.spark.ml.param.shared
Trait for shared param outputCol (default: uid + "__output").
HasOutputCols - Interface in org.apache.spark.ml.param.shared
Trait for shared param outputCols.
HasParallelism - Interface in org.apache.spark.ml.param.shared
Trait to define a level of parallelism for algorithms that are able to use multithreaded execution, and provide a thread-pool based execution context.
hasParam(String) - Method in interface org.apache.spark.ml.param.Params
Tests whether this instance contains a param with a given name.
hasParent() - Method in class org.apache.spark.ml.Model
Indicates whether this Model has a corresponding parent.
HasPredictionCol - Interface in org.apache.spark.ml.param.shared
Trait for shared param predictionCol (default: "prediction").
HasProbabilityCol - Interface in org.apache.spark.ml.param.shared
Trait for shared param probabilityCol (default: "probability").
hasQuantilesCol() - Method in interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams
Checks whether the input has a quantiles column name.
HasRawPredictionCol - Interface in org.apache.spark.ml.param.shared
Trait for shared param rawPredictionCol (default: "rawPrediction").
HasRegParam - Interface in org.apache.spark.ml.param.shared
Trait for shared param regParam.
HasRelativeError - Interface in org.apache.spark.ml.param.shared
Trait for shared param relativeError (default: 0.001).
hasRootAsShutdownDeleteDir(File) - Static method in class org.apache.spark.util.ShutdownHookManager

HasSeed - Interface in org.apache.spark.ml.param.shared
Trait for shared param seed (default: this.getClass.getName.hashCode.toLong).
hasShuffleRead(StageData) - Static method in class org.apache.spark.ui.jobs.ApiHelper

hasShuffleWrite(StageData) - Static method in class org.apache.spark.ui.jobs.ApiHelper

hasShutdownDeleteDir(File) - Static method in class org.apache.spark.util.ShutdownHookManager

HasSolver - Interface in org.apache.spark.ml.param.shared
Trait for shared param solver.
HasStandardization - Interface in org.apache.spark.ml.param.shared
Trait for shared param standardization (default: true).
HasStepSize - Interface in org.apache.spark.ml.param.shared
Trait for shared param stepSize.
hasSubModels() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel

hasSubModels() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel

hasSummary() - Method in interface org.apache.spark.ml.util.HasTrainingSummary
Indicates whether a training summary exists for this model instance.
HasThreshold - Interface in org.apache.spark.ml.param.shared
Trait for shared param threshold.
HasThresholds - Interface in org.apache.spark.ml.param.shared
Trait for shared param thresholds.
hasTimedOut() - Method in interface org.apache.spark.sql.streaming.GroupState
Whether the function has been called because the key has timed out.
HasTol - Interface in org.apache.spark.ml.param.shared
Trait for shared param tol.
HasTrainingSummary<T> - Interface in org.apache.spark.ml.util
Trait for models that provide a training summary.
HasValidationIndicatorCol - Interface in org.apache.spark.ml.param.shared
Trait for shared param validationIndicatorCol.
hasValue(String) - Method in class org.apache.spark.ml.attribute.NominalAttribute
Tests whether this attribute contains a specific value.
HasVarianceCol - Interface in org.apache.spark.ml.param.shared
Trait for shared param varianceCol.
HasVarianceImpurity - Interface in org.apache.spark.ml.tree

HasWeightCol - Interface in org.apache.spark.ml.param.shared
Trait for shared param weightCol.
hasWeightCol() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
Checks whether weight column is set and nonempty.
hasWeightCol() - Method in interface org.apache.spark.ml.regression.IsotonicRegressionBase
Checks whether the input has a weight column.
hasWriteObjectMethod() - Method in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods

hasWriteReplaceMethod() - Method in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods

HdfsUtils - Class in org.apache.spark.streaming.util

HdfsUtils() - Constructor for class org.apache.spark.streaming.util.HdfsUtils

head(int) - Method in class org.apache.spark.sql.Dataset
Returns the first n rows.
head() - Method in class org.apache.spark.sql.Dataset
Returns the first row.
HEADER_ACCUMULATORS() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_ATTEMPT() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_DESER_TIME() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_DISK_SPILL() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_DURATION() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_ERROR() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_EXECUTOR() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_GC_TIME() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_GETTING_RESULT_TIME() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_HOST() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_ID() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_INPUT_SIZE() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_LAUNCH_TIME() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_LOCALITY() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_MEM_SPILL() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_OUTPUT_SIZE() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_PEAK_MEM() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_SCHEDULER_DELAY() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_SER_TIME() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_SHUFFLE_READ_TIME() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_SHUFFLE_REMOTE_READS() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_SHUFFLE_TOTAL_READS() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_SHUFFLE_WRITE_SIZE() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_SHUFFLE_WRITE_TIME() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_STATUS() - Static method in class org.apache.spark.ui.jobs.ApiHelper

HEADER_TASK_INDEX() - Static method in class org.apache.spark.ui.jobs.ApiHelper

headers() - Method in interface org.apache.spark.ui.PagedTable

headerSparkPage(HttpServletRequest, String, Function0<Seq<Node>>, SparkUITab, Option<String>, boolean, boolean) - Static method in class org.apache.spark.ui.UIUtils
Returns a Spark page with correctly formatted headers.
hex(Column) - Static method in class org.apache.spark.sql.functions
Computes the hex value of the given column.
high() - Method in class org.apache.spark.partial.BoundedDouble

HingeGradient - Class in org.apache.spark.mllib.optimization
:: DeveloperApi :: Compute gradient and loss for a Hinge loss function, as used in SVM binary classification.
HingeGradient() - Constructor for class org.apache.spark.mllib.optimization.HingeGradient

hint(String, Object...) - Method in class org.apache.spark.sql.Dataset
Specifies some hint on the current Dataset.
hint(String, Seq<Object>) - Method in class org.apache.spark.sql.Dataset
Specifies some hint on the current Dataset.
histogram(int) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Compute a histogram of the data using bucketCount number of buckets evenly spaced between the minimum and maximum of the RDD.
histogram(double[]) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Compute a histogram using the provided buckets.
histogram(Double[], boolean) - Method in class org.apache.spark.api.java.JavaDoubleRDD

histogram(int) - Method in class org.apache.spark.rdd.DoubleRDDFunctions
Compute a histogram of the data using bucketCount number of buckets evenly spaced between the minimum and maximum of the RDD.
histogram(double[], boolean) - Method in class org.apache.spark.rdd.DoubleRDDFunctions
Compute a histogram using the provided buckets.
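The evenly-spaced-buckets variant of histogram can be sketched locally (plain Python; a sketch that assumes max > min and, like the documented method, treats the last bucket as closed on its upper end):

```python
def histogram(values, bucket_count):
    # bucket_count evenly spaced buckets between min and max of the data,
    # returning (bucket_boundaries, counts); values equal to the maximum
    # fall into the last bucket.
    lo, hi = min(values), max(values)
    width = (hi - lo) / bucket_count
    boundaries = [lo + i * width for i in range(bucket_count + 1)]
    counts = [0] * bucket_count
    for v in values:
        i = min(int((v - lo) / width), bucket_count - 1)
        counts[i] += 1
    return boundaries, counts

print(histogram([1.0, 2.0, 2.5, 4.0], 3))
# ([1.0, 2.0, 3.0, 4.0], [1, 2, 1])
```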
History - Class in org.apache.spark.internal.config

History() - Constructor for class org.apache.spark.internal.config.History

HISTORY_LOG_DIR() - Static method in class org.apache.spark.internal.config.History

HISTORY_SERVER_UI_ACLS_ENABLE() - Static method in class org.apache.spark.internal.config.History

HISTORY_SERVER_UI_ADMIN_ACLS() - Static method in class org.apache.spark.internal.config.History

HISTORY_SERVER_UI_ADMIN_ACLS_GROUPS() - Static method in class org.apache.spark.internal.config.History

HISTORY_SERVER_UI_PORT() - Static method in class org.apache.spark.internal.config.History

HIVE_GENERIC_UDF_MACRO_CLS() - Static method in class org.apache.spark.sql.hive.HiveShim

HIVE_METASTORE_BARRIER_PREFIXES() - Static method in class org.apache.spark.sql.hive.HiveUtils

HIVE_METASTORE_JARS() - Static method in class org.apache.spark.sql.hive.HiveUtils

HIVE_METASTORE_SHARED_PREFIXES() - Static method in class org.apache.spark.sql.hive.HiveUtils

HIVE_METASTORE_VERSION() - Static method in class org.apache.spark.sql.hive.HiveUtils

HIVE_THRIFT_SERVER_ASYNC() - Static method in class org.apache.spark.sql.hive.HiveUtils

HiveAnalysis - Class in org.apache.spark.sql.hive
Replaces generic operations with specific variants that are designed to work with Hive.
HiveAnalysis() - Constructor for class org.apache.spark.sql.hive.HiveAnalysis

HiveCatalogMetrics - Class in org.apache.spark.metrics.source
Metrics for access to the Hive external catalog.
HiveCatalogMetrics() - Constructor for class org.apache.spark.metrics.source.HiveCatalogMetrics

HiveClient - Interface in org.apache.spark.sql.hive.client
An externally visible interface to the Hive client.
HiveFileFormat - Class in org.apache.spark.sql.hive.execution
FileFormat for writing Hive tables.
HiveFileFormat(org.apache.spark.sql.hive.HiveShim.ShimFileSinkDesc) - Constructor for class org.apache.spark.sql.hive.execution.HiveFileFormat

HiveFileFormat() - Constructor for class org.apache.spark.sql.hive.execution.HiveFileFormat

HiveFunctionWrapper$() - Constructor for class org.apache.spark.sql.hive.HiveShim.HiveFunctionWrapper$

HiveInspectors - Interface in org.apache.spark.sql.hive
1.
HiveInspectors.typeInfoConversions - Class in org.apache.spark.sql.hive

HiveOptions - Class in org.apache.spark.sql.hive.execution
Options for the Hive data source.
HiveOptions(CaseInsensitiveMap<String>) - Constructor for class org.apache.spark.sql.hive.execution.HiveOptions

HiveOptions(Map<String, String>) - Constructor for class org.apache.spark.sql.hive.execution.HiveOptions

HiveOutputWriter - Class in org.apache.spark.sql.hive.execution

HiveOutputWriter(String, org.apache.spark.sql.hive.HiveShim.ShimFileSinkDesc, JobConf, StructType) - Constructor for class org.apache.spark.sql.hive.execution.HiveOutputWriter

HiveScriptIOSchema - Class in org.apache.spark.sql.hive.execution

HiveScriptIOSchema(Seq<Tuple2<String, String>>, Seq<Tuple2<String, String>>, Option<String>, Option<String>, Seq<Tuple2<String, String>>, Seq<Tuple2<String, String>>, Option<String>, Option<String>, boolean) - Constructor for class org.apache.spark.sql.hive.execution.HiveScriptIOSchema

HiveSessionResourceLoader - Class in org.apache.spark.sql.hive

HiveSessionResourceLoader(SparkSession, Function0<HiveClient>) - Constructor for class org.apache.spark.sql.hive.HiveSessionResourceLoader

HiveSessionStateBuilder - Class in org.apache.spark.sql.hive
Builder that produces a Hive-aware SessionState.
HiveSessionStateBuilder(SparkSession, Option<SessionState>) - Constructor for class org.apache.spark.sql.hive.HiveSessionStateBuilder

HiveShim - Class in org.apache.spark.sql.hive

HiveShim() - Constructor for class org.apache.spark.sql.hive.HiveShim

HiveShim.HiveFunctionWrapper$ - Class in org.apache.spark.sql.hive

HiveStrategies - Interface in org.apache.spark.sql.hive

HiveStrategies.HiveTableScans - Class in org.apache.spark.sql.hive
Retrieves data using a HiveTableScan.
HiveStrategies.HiveTableScans$ - Class in org.apache.spark.sql.hive
Retrieves data using a HiveTableScan.
HiveStrategies.Scripts - Class in org.apache.spark.sql.hive

HiveStrategies.Scripts$ - Class in org.apache.spark.sql.hive

HiveStringType - Class in org.apache.spark.sql.types
A Hive string type for compatibility.
HiveStringType() - Constructor for class org.apache.spark.sql.types.HiveStringType

HiveTableScans() - Method in interface org.apache.spark.sql.hive.HiveStrategies

HiveTableScans() - Constructor for class org.apache.spark.sql.hive.HiveStrategies.HiveTableScans

HiveTableScans$() - Constructor for class org.apache.spark.sql.hive.HiveStrategies.HiveTableScans$

HiveTableUtil - Class in org.apache.spark.sql.hive

HiveTableUtil() - Constructor for class org.apache.spark.sql.hive.HiveTableUtil

HiveUDAFBuffer - Class in org.apache.spark.sql.hive

HiveUDAFBuffer(GenericUDAFEvaluator.AggregationBuffer, boolean) - Constructor for class org.apache.spark.sql.hive.HiveUDAFBuffer

HiveUtils - Class in org.apache.spark.sql.hive

HiveUtils() - Constructor for class org.apache.spark.sql.hive.HiveUtils

holdingLocks() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace

horzcat(Matrix[]) - Static method in class org.apache.spark.ml.linalg.Matrices
Horizontally concatenate a sequence of matrices.
horzcat(Matrix[]) - Static method in class org.apache.spark.mllib.linalg.Matrices
Horizontally concatenate a sequence of matrices.
host() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutorsOnHost

host() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker

host() - Method in class org.apache.spark.scheduler.TaskInfo

host() - Method in interface org.apache.spark.scheduler.TaskLocation

host() - Method in interface org.apache.spark.SparkExecutorInfo

host() - Method in class org.apache.spark.SparkExecutorInfoImpl

host() - Method in class org.apache.spark.status.api.v1.TaskData

host() - Method in class org.apache.spark.status.LiveExecutor

HOST() - Static method in class org.apache.spark.status.TaskIndexNames

host() - Method in class org.apache.spark.storage.BlockManagerId

hostId() - Method in class org.apache.spark.scheduler.SparkListenerNodeBlacklisted

hostId() - Method in class org.apache.spark.scheduler.SparkListenerNodeBlacklistedForStage

hostId() - Method in class org.apache.spark.scheduler.SparkListenerNodeUnblacklisted

hostLocation() - Method in class org.apache.spark.scheduler.SplitInfo

hostname() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor

hostname() - Method in class org.apache.spark.status.LiveExecutor

hostPort() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

hostPort() - Method in class org.apache.spark.status.LiveExecutor

hostPort() - Method in class org.apache.spark.storage.BlockManagerId

hostToLocalTaskCount() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors
 
hour(Column) - 类 中的静态方法org.apache.spark.sql.functions
Extracts the hours as an integer from a given date/timestamp/string.
hours() - 类 中的静态方法org.apache.spark.scheduler.StatsReportListener
 
hours(String) - 类 中的静态方法org.apache.spark.sql.connector.expressions.Expressions
Create an hourly transform for a timestamp column.
hours(String) - 类 中的静态方法org.apache.spark.sql.connector.expressions.LogicalExpressions
 
hours(Column) - 类 中的静态方法org.apache.spark.sql.functions
A transform for timestamps to partition data into hours.
html() - 类 中的方法org.apache.spark.status.api.v1.StackTrace
 
htmlResponderToServlet(Function1<HttpServletRequest, Seq<Node>>) - 类 中的静态方法org.apache.spark.ui.JettyUtils
 
httpRequest() - 接口 中的方法org.apache.spark.status.api.v1.ApiRequestContext
 
httpResponseCode(URL, String, Seq<Tuple2<String, String>>) - 类 中的静态方法org.apache.spark.TestUtils
Returns the response code from an HTTP(S) URL.
HttpSecurityFilter - org.apache.spark.ui中的类
A servlet filter that implements HTTP security features.
HttpSecurityFilter(SparkConf, org.apache.spark.SecurityManager) - 类 的构造器org.apache.spark.ui.HttpSecurityFilter
 
hypot(Column, Column) - 类 中的静态方法org.apache.spark.sql.functions
Computes sqrt(a^2^ + b^2^) without intermediate overflow or underflow.
hypot(Column, String) - 类 中的静态方法org.apache.spark.sql.functions
Computes sqrt(a^2^ + b^2^) without intermediate overflow or underflow.
hypot(String, Column) - 类 中的静态方法org.apache.spark.sql.functions
Computes sqrt(a^2^ + b^2^) without intermediate overflow or underflow.
hypot(String, String) - 类 中的静态方法org.apache.spark.sql.functions
Computes sqrt(a^2^ + b^2^) without intermediate overflow or underflow.
hypot(Column, double) - 类 中的静态方法org.apache.spark.sql.functions
Computes sqrt(a^2^ + b^2^) without intermediate overflow or underflow.
hypot(String, double) - 类 中的静态方法org.apache.spark.sql.functions
Computes sqrt(a^2^ + b^2^) without intermediate overflow or underflow.
hypot(double, Column) - 类 中的静态方法org.apache.spark.sql.functions
Computes sqrt(a^2^ + b^2^) without intermediate overflow or underflow.
hypot(double, String) - 类 中的静态方法org.apache.spark.sql.functions
Computes sqrt(a^2^ + b^2^) without intermediate overflow or underflow.

I

i() - 类 中的方法org.apache.spark.mllib.linalg.distributed.MatrixEntry
 
id() - 接口 中的方法org.apache.spark.api.java.JavaRDDLike
A unique ID for this RDD (within its SparkContext).
id() - 类 中的方法org.apache.spark.broadcast.Broadcast
 
id() - 类 中的方法org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
 
id() - 类 中的方法org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
 
id() - 类 中的方法org.apache.spark.mllib.tree.model.Node
 
id() - 类 中的方法org.apache.spark.rdd.RDD
A unique ID for this RDD (within its SparkContext).
id() - 类 中的方法org.apache.spark.scheduler.AccumulableInfo
 
id() - 类 中的方法org.apache.spark.scheduler.TaskInfo
 
id() - 接口 中的方法org.apache.spark.sql.streaming.StreamingQuery
Returns the unique id of this query that persists across restarts from checkpoint data.
id() - 类 中的方法org.apache.spark.sql.streaming.StreamingQueryListener.QueryStartedEvent
 
id() - 类 中的方法org.apache.spark.sql.streaming.StreamingQueryListener.QueryTerminatedEvent
 
id() - 类 中的方法org.apache.spark.sql.streaming.StreamingQueryProgress
 
id() - 类 中的方法org.apache.spark.status.api.v1.AccumulableInfo
 
id() - 类 中的方法org.apache.spark.status.api.v1.ApplicationInfo
 
id() - 类 中的方法org.apache.spark.status.api.v1.ExecutorSummary
 
id() - 类 中的方法org.apache.spark.status.api.v1.RDDStorageInfo
 
id() - 类 中的方法org.apache.spark.storage.RDDInfo
 
id() - 类 中的方法org.apache.spark.streaming.dstream.InputDStream
This is a unique identifier for the input stream.
id() - 类 中的方法org.apache.spark.streaming.scheduler.OutputOperationInfo
 
id() - 类 中的方法org.apache.spark.util.AccumulatorV2
Returns the id of this accumulator, can only be called after registration.
Identifiable - org.apache.spark.ml.util中的接口
:: DeveloperApi :: Trait for an object with an immutable unique ID that identifies itself and its derivatives.
Identifier - org.apache.spark.sql.connector.catalog中的接口
Identifies an object in a catalog.
IdentifierHelper(Identifier) - 类 的构造器org.apache.spark.sql.connector.catalog.CatalogV2Implicits.IdentifierHelper
 
identity(String) - 类 中的静态方法org.apache.spark.sql.connector.expressions.Expressions
Create an identity transform for a column.
identity(String) - 类 中的静态方法org.apache.spark.sql.connector.expressions.LogicalExpressions
 
Identity$() - 类 的构造器org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity$
 
IDF - org.apache.spark.ml.feature中的类
Compute the Inverse Document Frequency (IDF) given a collection of documents.
IDF(String) - 类 的构造器org.apache.spark.ml.feature.IDF
 
IDF() - 类 的构造器org.apache.spark.ml.feature.IDF
 
idf() - 类 中的方法org.apache.spark.ml.feature.IDFModel
Returns the IDF vector.
IDF - org.apache.spark.mllib.feature中的类
Inverse document frequency (IDF).
IDF(int) - 类 的构造器org.apache.spark.mllib.feature.IDF
 
IDF() - 类 的构造器org.apache.spark.mllib.feature.IDF
 
idf() - 类 中的方法org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
Returns the current IDF vector, docFreq, number of documents
idf() - 类 中的方法org.apache.spark.mllib.feature.IDFModel
 
IDF.DocumentFrequencyAggregator - org.apache.spark.mllib.feature中的类
Document frequency aggregator.
IDFBase - org.apache.spark.ml.feature中的接口
Params for IDF and IDFModel.
IDFModel - org.apache.spark.ml.feature中的类
Model fitted by IDF.
IDFModel - org.apache.spark.mllib.feature中的类
Represents an IDF model that can transform term frequency vectors.
ifPartitionNotExists() - 类 中的方法org.apache.spark.sql.hive.execution.InsertIntoHiveTable
 
ImageDataSource - org.apache.spark.ml.source.image中的类
The image package implements the Spark SQL data source API for loading image data as a DataFrame.
ImageDataSource() - 类 的构造器org.apache.spark.ml.source.image.ImageDataSource
 
imageFields() - 类 中的静态方法org.apache.spark.ml.image.ImageSchema
 
ImageSchema - org.apache.spark.ml.image中的类
Defines the image schema and methods to read and manipulate images.
ImageSchema() - 类 的构造器org.apache.spark.ml.image.ImageSchema
 
imageSchema() - 类 中的静态方法org.apache.spark.ml.image.ImageSchema
DataFrame with a single column of images named "image" (nullable)
implicitPrefs() - 类 中的方法org.apache.spark.ml.recommendation.ALS
 
implicitPrefs() - 接口 中的方法org.apache.spark.ml.recommendation.ALSParams
Param to decide whether to use implicit preference.
implicits() - 类 中的方法org.apache.spark.sql.SparkSession
Accessor for nested Scala object
implicits() - 类 中的方法org.apache.spark.sql.SQLContext
Accessor for nested Scala object
implicits$() - 类 的构造器org.apache.spark.sql.SparkSession.implicits$
 
implicits$() - 类 的构造器org.apache.spark.sql.SQLContext.implicits$
 
improveException(Object, NotSerializableException) - 类 中的静态方法org.apache.spark.serializer.SerializationDebugger
Improve the given NotSerializableException with the serialization path leading from the given object to the problematic object.
Impurities - org.apache.spark.mllib.tree.impurity中的类
Factory for Impurity instances.
Impurities() - 类 的构造器org.apache.spark.mllib.tree.impurity.Impurities
 
impurity() - 类 中的方法org.apache.spark.ml.classification.DecisionTreeClassificationModel
 
impurity() - 类 中的方法org.apache.spark.ml.classification.DecisionTreeClassifier
 
impurity() - 类 中的方法org.apache.spark.ml.classification.GBTClassificationModel
 
impurity() - 类 中的方法org.apache.spark.ml.classification.GBTClassifier
 
impurity() - 类 中的方法org.apache.spark.ml.classification.RandomForestClassificationModel
 
impurity() - 类 中的方法org.apache.spark.ml.classification.RandomForestClassifier
 
impurity() - 类 中的方法org.apache.spark.ml.regression.DecisionTreeRegressionModel
 
impurity() - 类 中的方法org.apache.spark.ml.regression.DecisionTreeRegressor
 
impurity() - 类 中的方法org.apache.spark.ml.regression.GBTRegressionModel
 
impurity() - 类 中的方法org.apache.spark.ml.regression.GBTRegressor
 
impurity() - 类 中的方法org.apache.spark.ml.regression.RandomForestRegressionModel
 
impurity() - 类 中的方法org.apache.spark.ml.regression.RandomForestRegressor
 
impurity() - 类 中的方法org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
 
impurity() - 接口 中的方法org.apache.spark.ml.tree.HasVarianceImpurity
Criterion used for information gain calculation (case-insensitive).
impurity() - 类 中的方法org.apache.spark.ml.tree.InternalNode
 
impurity() - 类 中的方法org.apache.spark.ml.tree.LeafNode
 
impurity() - 类 中的方法org.apache.spark.ml.tree.Node
Impurity measure at this node (for training data)
impurity() - 接口 中的方法org.apache.spark.ml.tree.TreeClassifierParams
Criterion used for information gain calculation (case-insensitive).
impurity() - 类 中的方法org.apache.spark.mllib.tree.configuration.Strategy
 
Impurity - org.apache.spark.mllib.tree.impurity中的接口
Trait for calculating information gain.
impurity() - 类 中的方法org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
 
impurity() - 类 中的方法org.apache.spark.mllib.tree.model.InformationGainStats
 
impurity() - 类 中的方法org.apache.spark.mllib.tree.model.Node
 
impurityStats() - 类 中的方法org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
 
Imputer - org.apache.spark.ml.feature中的类
Imputation estimator for completing missing values, either using the mean or the median of the columns in which the missing values are located.
Imputer(String) - 类 的构造器org.apache.spark.ml.feature.Imputer
 
Imputer() - 类 的构造器org.apache.spark.ml.feature.Imputer
 
ImputerModel - org.apache.spark.ml.feature中的类
Model fitted by Imputer.
ImputerParams - org.apache.spark.ml.feature中的接口
Params for Imputer and ImputerModel.
In() - 类 中的静态方法org.apache.spark.graphx.EdgeDirection
Edges arriving at a vertex.
In - org.apache.spark.sql.sources中的类
A filter that evaluates to true iff the attribute evaluates to one of the values in the array.
In(String, Object[]) - 类 的构造器org.apache.spark.sql.sources.In
 
INACTIVE() - 类 中的静态方法org.apache.spark.streaming.scheduler.ReceiverState
 
inArray(Object) - 类 中的静态方法org.apache.spark.ml.param.ParamValidators
Check for value in an allowed set of values.
inArray(List<T>) - 类 中的静态方法org.apache.spark.ml.param.ParamValidators
Check for value in an allowed set of values.
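The inArray validators above produce a predicate that accepts only values from a fixed allowed set. A small Python sketch of that idea (the solver names below are hypothetical example values, not taken from this index):

```python
# Sketch of an allowed-set validator in the spirit of ParamValidators.inArray:
# returns a predicate that is true only for members of the allowed set.
def in_array(allowed):
    allowed = set(allowed)
    return lambda value: value in allowed

is_solver = in_array(["l-bfgs", "normal", "auto"])  # hypothetical values
print(is_solver("auto"), is_solver("sgd"))  # True False
```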
InBlock$() - 类 的构造器org.apache.spark.ml.recommendation.ALS.InBlock$
 
InboxMessage - org.apache.spark.rpc.netty中的接口
 
IncompatibleMergeException - org.apache.spark.util.sketch中的异常错误
 
IncompatibleMergeException(String) - 异常错误 的构造器org.apache.spark.util.sketch.IncompatibleMergeException
 
incrementFetchedPartitions(int) - 类 中的静态方法org.apache.spark.metrics.source.HiveCatalogMetrics
 
incrementFileCacheHits(int) - 类 中的静态方法org.apache.spark.metrics.source.HiveCatalogMetrics
 
incrementFilesDiscovered(int) - 类 中的静态方法org.apache.spark.metrics.source.HiveCatalogMetrics
 
incrementHiveClientCalls(int) - 类 中的静态方法org.apache.spark.metrics.source.HiveCatalogMetrics
 
incrementParallelListingJobCount(int) - 类 中的静态方法org.apache.spark.metrics.source.HiveCatalogMetrics
 
inDegrees() - 类 中的方法org.apache.spark.graphx.GraphOps
 
independence() - 类 中的方法org.apache.spark.mllib.stat.test.ChiSqTest.NullHypothesis$
 
INDETERMINATE() - 类 中的静态方法org.apache.spark.rdd.DeterministicLevel
 
index() - 类 中的方法org.apache.spark.ml.attribute.Attribute
Index of the attribute.
INDEX() - 类 中的静态方法org.apache.spark.ml.attribute.AttributeKeys
 
index() - 类 中的方法org.apache.spark.ml.attribute.BinaryAttribute
 
index() - 类 中的方法org.apache.spark.ml.attribute.NominalAttribute
 
index() - 类 中的方法org.apache.spark.ml.attribute.NumericAttribute
 
index() - 类 中的静态方法org.apache.spark.ml.attribute.UnresolvedAttribute
 
index(int, int) - 接口 中的方法org.apache.spark.ml.linalg.Matrix
Return the index for the (i, j)-th element in the backing array.
index() - 类 中的方法org.apache.spark.mllib.linalg.distributed.IndexedRow
 
index(int, int) - 接口 中的方法org.apache.spark.mllib.linalg.Matrix
Return the index for the (i, j)-th element in the backing array.
index() - 接口 中的方法org.apache.spark.Partition
Get the partition's index within its parent RDD
index() - 类 中的方法org.apache.spark.scheduler.TaskInfo
The index of this task within its task set.
index() - 类 中的方法org.apache.spark.status.api.v1.TaskData
 
IndexedRow - org.apache.spark.mllib.linalg.distributed中的类
Represents a row of IndexedRowMatrix.
IndexedRow(long, Vector) - 类 的构造器org.apache.spark.mllib.linalg.distributed.IndexedRow
 
IndexedRowMatrix - org.apache.spark.mllib.linalg.distributed中的类
Represents a row-oriented DistributedMatrix with indexed rows.
IndexedRowMatrix(RDD<IndexedRow>, long, int) - 类 的构造器org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
 
IndexedRowMatrix(RDD<IndexedRow>) - 类 的构造器org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
Alternative constructor leaving matrix dimensions to be determined automatically.
indexName(String) - 类 中的静态方法org.apache.spark.ui.jobs.ApiHelper
 
indexOf(String) - 类 中的方法org.apache.spark.ml.attribute.AttributeGroup
Index of an attribute specified by name.
indexOf(String) - 类 中的方法org.apache.spark.ml.attribute.NominalAttribute
Index of a specific value.
indexOf(Object) - 类 中的方法org.apache.spark.ml.feature.HashingTF
Returns the index of the input term.
indexOf(Object) - 类 中的方法org.apache.spark.mllib.feature.HashingTF
Returns the index of the input term.
indexToLevel(int) - 类 中的静态方法org.apache.spark.mllib.tree.model.Node
Return the level of a tree which the given node is in.
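indexToLevel relies on the heap-style numbering used for tree nodes, where the root is index 1 and the children of node i are 2i and 2i+1; under that assumption the level is floor(log2(index)). A quick Python model:

```python
# Level of a node in a 1-based, heap-style numbered binary tree:
# root is index 1 (level 0), its children are 2 and 3 (level 1), and so on.
def index_to_level(index: int) -> int:
    if index < 1:
        raise ValueError("node indices start at 1")
    return index.bit_length() - 1  # floor(log2(index)) for positive ints

print([index_to_level(i) for i in range(1, 8)])  # [0, 1, 1, 2, 2, 2, 2]
```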
IndexToString - org.apache.spark.ml.feature中的类
A Transformer that maps a column of indices back to a new column of corresponding string values.
IndexToString(String) - 类 的构造器org.apache.spark.ml.feature.IndexToString
 
IndexToString() - 类 的构造器org.apache.spark.ml.feature.IndexToString
 
indices() - 类 中的方法org.apache.spark.ml.feature.VectorSlicer
An array of indices to select features from a vector column.
indices() - 类 中的方法org.apache.spark.ml.linalg.SparseVector
 
indices() - 类 中的方法org.apache.spark.mllib.linalg.SparseVector
 
inferSchema(SparkSession, Map<String, String>, Seq<FileStatus>) - 类 中的方法org.apache.spark.sql.hive.execution.HiveFileFormat
 
inferSchema(CatalogTable) - 类 中的静态方法org.apache.spark.sql.hive.HiveUtils
Infers the schema for Hive serde tables and returns the CatalogTable with the inferred schema.
inferSchema(SparkSession, Map<String, String>, Seq<FileStatus>) - 类 中的方法org.apache.spark.sql.hive.orc.OrcFileFormat
 
info() - 类 中的方法org.apache.spark.status.LiveRDD
 
info() - 类 中的方法org.apache.spark.status.LiveStage
 
info() - 类 中的方法org.apache.spark.status.LiveTask
 
infoChanged(SparkAppHandle) - 接口 中的方法org.apache.spark.launcher.SparkAppHandle.Listener
Callback for changes in any information that is not the handle's state.
infoGain() - 类 中的方法org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
 
InformationGainStats - org.apache.spark.mllib.tree.model中的类
:: DeveloperApi :: Information gain statistics for each split. param: gain - information gain value; param: impurity - current node impurity; param: leftImpurity - left node impurity; param: rightImpurity - right node impurity; param: leftPredict - left node predict; param: rightPredict - right node predict
InformationGainStats(double, double, double, double, Predict, Predict) - 类 的构造器org.apache.spark.mllib.tree.model.InformationGainStats
 
init(ExecutorPluginContext) - 接口 中的方法org.apache.spark.ExecutorPlugin
Initialize the executor plugin.
init(FilterConfig) - 类 中的方法org.apache.spark.ui.HttpSecurityFilter
 
initcap(Column) - 类 中的静态方法org.apache.spark.sql.functions
Returns a new string column by converting the first letter of each word to uppercase.
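A rough Python model of the initcap semantics described above: uppercase the first letter of each whitespace-separated word and lowercase the rest. Spark applies this per row of a string column, and its word-splitting details may differ; this is a simplified per-string sketch.

```python
# Simplified model of initcap: capitalize the first letter of each
# space-separated word, lowercasing the remaining letters.
def initcap(s: str) -> str:
    return " ".join(w[:1].upper() + w[1:].lower() for w in s.split(" "))

print(initcap("spark SQL functions"))  # Spark Sql Functions
```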
initDaemon(Logger) - 类 中的静态方法org.apache.spark.util.Utils
Utility function that should be called early in main() for daemons to set up some common diagnostic state.
initHadoopOutputMetrics(TaskContext) - 类 中的静态方法org.apache.spark.internal.io.SparkHadoopWriterUtils
 
initialHash() - 类 中的方法org.apache.spark.rdd.DefaultPartitionCoalescer
 
initialize(boolean, SparkConf, org.apache.spark.SecurityManager) - 接口 中的方法org.apache.spark.broadcast.BroadcastFactory
 
initialize(double, double) - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
 
initialize(double, double) - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
 
initialize(double, double) - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
 
initialize(double, double) - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
 
initialize(RDD<Tuple2<Object, Vector>>, LDA) - 接口 中的方法org.apache.spark.mllib.clustering.LDAOptimizer
Initializer for the optimizer.
initialize() - 类 中的静态方法org.apache.spark.rdd.InputFileBlockHolder
Initializes thread local by explicitly getting the value.
initialize(TaskScheduler, SchedulerBackend) - 接口 中的方法org.apache.spark.scheduler.ExternalClusterManager
Initialize task scheduler and backend scheduler.
initialize(String, CaseInsensitiveStringMap) - 接口 中的方法org.apache.spark.sql.connector.catalog.CatalogPlugin
Called to initialize configuration.
initialize(String, CaseInsensitiveStringMap) - 类 中的方法org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
 
initialize(MutableAggregationBuffer) - 类 中的方法org.apache.spark.sql.expressions.UserDefinedAggregateFunction
Initializes the given aggregation buffer, i.e. the zero value of the aggregation buffer.
initializeApplication() - 接口 中的方法org.apache.spark.shuffle.api.ShuffleDriverComponents
Called once in the driver to bootstrap this module that is specific to this application.
Initialized() - 类 中的静态方法org.apache.spark.rdd.CheckpointState
 
initializeExecutor(String, String, Map<String, String>) - 接口 中的方法org.apache.spark.shuffle.api.ShuffleExecutorComponents
Called once per executor to bootstrap this module with state that is specific to that executor, specifically the application ID and executor ID.
initializeLogging(boolean, boolean) - 接口 中的方法org.apache.spark.internal.Logging
 
initializeLogIfNecessary(boolean) - 接口 中的方法org.apache.spark.internal.Logging
 
initializeLogIfNecessary(boolean, boolean) - 接口 中的方法org.apache.spark.internal.Logging
 
initialOffset() - 接口 中的方法org.apache.spark.sql.connector.read.streaming.SparkDataStream
Returns the initial offset for a streaming query to start reading from.
initialState(RDD<Tuple2<KeyType, StateType>>) - 类 中的方法org.apache.spark.streaming.StateSpec
Set the RDD containing the initial states that will be used by mapWithState
initialState(JavaPairRDD<KeyType, StateType>) - 类 中的方法org.apache.spark.streaming.StateSpec
Set the RDD containing the initial states that will be used by mapWithState
initialValue() - 类 中的方法org.apache.spark.partial.PartialResult
 
initialWeights() - 类 中的方法org.apache.spark.ml.classification.MultilayerPerceptronClassifier
 
initialWeights() - 接口 中的方法org.apache.spark.ml.classification.MultilayerPerceptronParams
The initial weights of the model.
initInputSerDe(Seq<Expression>) - 类 中的方法org.apache.spark.sql.hive.execution.HiveScriptIOSchema
 
initMode() - 类 中的方法org.apache.spark.ml.clustering.KMeans
 
initMode() - 类 中的方法org.apache.spark.ml.clustering.KMeansModel
 
initMode() - 接口 中的方法org.apache.spark.ml.clustering.KMeansParams
Param for the initialization algorithm.
initMode() - 类 中的方法org.apache.spark.ml.clustering.PowerIterationClustering
 
initMode() - 接口 中的方法org.apache.spark.ml.clustering.PowerIterationClusteringParams
Param for the initialization algorithm.
initModel(DenseVector<Object>, Random) - 接口 中的方法org.apache.spark.ml.ann.Layer
Returns the instance of the layer with random generated weights.
initOutputFormat(JobContext) - 类 中的方法org.apache.spark.internal.io.HadoopWriteConfigUtil
 
initOutputSerDe(Seq<Attribute>) - 类 中的方法org.apache.spark.sql.hive.execution.HiveScriptIOSchema
 
initSteps() - 类 中的方法org.apache.spark.ml.clustering.KMeans
 
initSteps() - 类 中的方法org.apache.spark.ml.clustering.KMeansModel
 
initSteps() - 接口 中的方法org.apache.spark.ml.clustering.KMeansParams
Param for the number of steps for the k-means|| initialization mode.
initWriter(TaskAttemptContext, int) - 类 中的方法org.apache.spark.internal.io.HadoopWriteConfigUtil
 
injectCheckRule(Function1<SparkSession, Function1<LogicalPlan, BoxedUnit>>) - 类 中的方法org.apache.spark.sql.SparkSessionExtensions
Inject a check analysis Rule builder into the SparkSession.
injectColumnar(Function1<SparkSession, ColumnarRule>) - 类 中的方法org.apache.spark.sql.SparkSessionExtensions
Inject a rule that can override the columnar execution of an executor.
injectFunction(Tuple3<FunctionIdentifier, ExpressionInfo, Function1<Seq<Expression>, Expression>>) - 类 中的方法org.apache.spark.sql.SparkSessionExtensions
Injects a custom function into the FunctionRegistry at runtime for all sessions.
injectOptimizerRule(Function1<SparkSession, Rule<LogicalPlan>>) - 类 中的方法org.apache.spark.sql.SparkSessionExtensions
Inject an optimizer Rule builder into the SparkSession.
injectParser(Function2<SparkSession, ParserInterface, ParserInterface>) - 类 中的方法org.apache.spark.sql.SparkSessionExtensions
Inject a custom parser into the SparkSession.
injectPlannerStrategy(Function1<SparkSession, SparkStrategy>) - 类 中的方法org.apache.spark.sql.SparkSessionExtensions
Inject a planner Strategy builder into the SparkSession.
injectPostHocResolutionRule(Function1<SparkSession, Rule<LogicalPlan>>) - 类 中的方法org.apache.spark.sql.SparkSessionExtensions
Inject an analyzer Rule builder into the SparkSession.
injectResolutionRule(Function1<SparkSession, Rule<LogicalPlan>>) - 类 中的方法org.apache.spark.sql.SparkSessionExtensions
Inject an analyzer resolution Rule builder into the SparkSession.
InnerClosureFinder - org.apache.spark.util中的类
 
InnerClosureFinder(Set<Class<?>>) - 类 的构造器org.apache.spark.util.InnerClosureFinder
 
innerJoin(EdgeRDD<ED2>, Function4<Object, Object, ED, ED2, ED3>, ClassTag<ED2>, ClassTag<ED3>) - 类 中的方法org.apache.spark.graphx.EdgeRDD
Inner joins this EdgeRDD with another EdgeRDD, assuming both are partitioned using the same PartitionStrategy.
innerJoin(EdgeRDD<ED2>, Function4<Object, Object, ED, ED2, ED3>, ClassTag<ED2>, ClassTag<ED3>) - 类 中的方法org.apache.spark.graphx.impl.EdgeRDDImpl
 
innerJoin(RDD<Tuple2<Object, U>>, Function3<Object, VD, U, VD2>, ClassTag<U>, ClassTag<VD2>) - 类 中的方法org.apache.spark.graphx.impl.VertexRDDImpl
 
innerJoin(RDD<Tuple2<Object, U>>, Function3<Object, VD, U, VD2>, ClassTag<U>, ClassTag<VD2>) - 类 中的方法org.apache.spark.graphx.VertexRDD
Inner joins this VertexRDD with an RDD containing vertex attribute pairs.
innerZipJoin(VertexRDD<U>, Function3<Object, VD, U, VD2>, ClassTag<U>, ClassTag<VD2>) - 类 中的方法org.apache.spark.graphx.impl.VertexRDDImpl
 
innerZipJoin(VertexRDD<U>, Function3<Object, VD, U, VD2>, ClassTag<U>, ClassTag<VD2>) - 类 中的方法org.apache.spark.graphx.VertexRDD
Efficiently inner joins this VertexRDD with another VertexRDD sharing the same index.
inPlace() - 接口 中的方法org.apache.spark.ml.ann.Layer
If true, the memory is not allocated for the output of this layer.
InProcessLauncher - org.apache.spark.launcher中的类
In-process launcher for Spark applications.
InProcessLauncher() - 类 的构造器org.apache.spark.launcher.InProcessLauncher
 
input() - 类 中的方法org.apache.spark.ml.TransformStart
 
input() - 类 中的方法org.apache.spark.sql.hive.execution.ScriptTransformationExec
 
INPUT() - 类 中的静态方法org.apache.spark.ui.ToolTips
 
input$() - 类 的构造器org.apache.spark.InternalAccumulator.input$
 
input_file_name() - 类 中的静态方法org.apache.spark.sql.functions
Creates a string column for the file name of the current Spark task.
INPUT_FORMAT() - 类 中的静态方法org.apache.spark.sql.hive.execution.HiveOptions
 
INPUT_METRICS_PREFIX() - 类 中的静态方法org.apache.spark.InternalAccumulator
 
INPUT_RECORDS() - 类 中的静态方法org.apache.spark.status.TaskIndexNames
 
INPUT_SIZE() - 类 中的静态方法org.apache.spark.status.TaskIndexNames
 
inputBytes() - 类 中的方法org.apache.spark.status.api.v1.ExecutorStageSummary
 
inputBytes() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.Binarizer
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.Bucketizer
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.CountVectorizer
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.CountVectorizerModel
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.HashingTF
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.IDF
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.IDFModel
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.Imputer
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.ImputerModel
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.IndexToString
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.MaxAbsScaler
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.MaxAbsScalerModel
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.MinMaxScaler
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.MinMaxScalerModel
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.OneHotEncoder
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.OneHotEncoderModel
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.PCA
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.PCAModel
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.QuantileDiscretizer
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.RobustScaler
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.RobustScalerModel
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.StandardScaler
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.StandardScalerModel
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.StopWordsRemover
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.StringIndexer
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.StringIndexerModel
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.VectorIndexer
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.VectorIndexerModel
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.VectorSizeHint
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.VectorSlicer
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.Word2Vec
 
inputCol() - 类 中的方法org.apache.spark.ml.feature.Word2VecModel
 
inputCol() - 接口 中的方法org.apache.spark.ml.param.shared.HasInputCol
Param for input column name.
inputCol() - 类 中的方法org.apache.spark.ml.UnaryTransformer
 
inputCols() - 类 中的方法org.apache.spark.ml.feature.Binarizer
 
inputCols() - 类 中的方法org.apache.spark.ml.feature.Bucketizer
 
inputCols() - 类 中的方法org.apache.spark.ml.feature.FeatureHasher
 
inputCols() - 类 中的方法org.apache.spark.ml.feature.Imputer
 
inputCols() - 类 中的方法org.apache.spark.ml.feature.ImputerModel
 
inputCols() - 类 中的方法org.apache.spark.ml.feature.Interaction
 
inputCols() - 类 中的方法org.apache.spark.ml.feature.OneHotEncoder
 
inputCols() - 类 中的方法org.apache.spark.ml.feature.OneHotEncoderModel
 
inputCols() - 类 中的方法org.apache.spark.ml.feature.QuantileDiscretizer
 
inputCols() - 类 中的方法org.apache.spark.ml.feature.StringIndexer
 
inputCols() - 类 中的方法org.apache.spark.ml.feature.StringIndexerModel
 
inputCols() - 类 中的方法org.apache.spark.ml.feature.VectorAssembler
 
inputCols() - 接口 中的方法org.apache.spark.ml.param.shared.HasInputCols
Param for input column names.
inputDStream() - 类 中的方法org.apache.spark.streaming.api.java.JavaInputDStream
 
inputDStream() - 类 中的方法org.apache.spark.streaming.api.java.JavaPairInputDStream
 
InputDStream<T> - org.apache.spark.streaming.dstream中的类
This is the abstract base class for all input streams.
InputDStream(StreamingContext, ClassTag<T>) - 类 的构造器org.apache.spark.streaming.dstream.InputDStream
 
InputFileBlockHolder - org.apache.spark.rdd中的类
This holds file names of the current Spark task.
InputFileBlockHolder() - 类 的构造器org.apache.spark.rdd.InputFileBlockHolder
 
inputFiles() - 类 中的方法org.apache.spark.sql.Dataset
Returns a best-effort snapshot of the files that compose this Dataset.
inputFormat() - 类 中的方法org.apache.spark.sql.hive.execution.HiveOptions
 
inputFormatClazz() - 类 中的方法org.apache.spark.scheduler.InputFormatInfo
 
inputFormatClazz() - 类 中的方法org.apache.spark.scheduler.SplitInfo
 
InputFormatInfo - org.apache.spark.scheduler中的类
:: DeveloperApi :: Parses and holds information about inputFormat (and files) specified as a parameter.
InputFormatInfo(Configuration, Class<?>, String) - 类 的构造器org.apache.spark.scheduler.InputFormatInfo
 
InputMetricDistributions - org.apache.spark.status.api.v1中的类
 
InputMetrics - org.apache.spark.status.api.v1中的类
 
inputMetrics() - 类 中的方法org.apache.spark.status.api.v1.TaskMetricDistributions
 
inputMetrics() - 类 中的方法org.apache.spark.status.api.v1.TaskMetrics
 
InputPartition - org.apache.spark.sql.connector.read中的接口
A serializable representation of an input partition returned by Batch.planInputPartitions() and the corresponding ones in streaming.
inputRecords() - 类 中的方法org.apache.spark.status.api.v1.ExecutorStageSummary
 
inputRecords() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
inputRowFormat() - 类 中的方法org.apache.spark.sql.hive.execution.HiveScriptIOSchema
 
inputRowFormatMap() - 类 中的方法org.apache.spark.sql.hive.execution.HiveScriptIOSchema
 
inputRowsPerSecond() - 类 中的方法org.apache.spark.sql.streaming.SourceProgress
 
inputRowsPerSecond() - 类 中的方法org.apache.spark.sql.streaming.StreamingQueryProgress
The aggregate (across all sources) rate of data arriving.
inputSchema() - 类 中的方法org.apache.spark.sql.expressions.UserDefinedAggregateFunction
A StructType represents data types of input arguments of this aggregate function.
inputSerdeClass() - 类 中的方法org.apache.spark.sql.hive.execution.HiveScriptIOSchema
 
inputSerdeProps() - 类 中的方法org.apache.spark.sql.hive.execution.HiveScriptIOSchema
 
inputSize() - 类 中的方法org.apache.spark.status.api.v1.streaming.BatchInfo
 
inputStreamId() - 类 中的方法org.apache.spark.streaming.scheduler.StreamInputInfo
 
inRange(double, double, boolean, boolean) - 类 中的静态方法org.apache.spark.ml.param.ParamValidators
Check for value in range lowerBound to upperBound.
inRange(double, double) - 类 中的静态方法org.apache.spark.ml.param.ParamValidators
Version of `inRange()` which uses inclusive bounds by default: [lowerBound, upperBound]
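The two inRange overloads above bound a value by a lower and upper limit, with flags controlling whether each bound is inclusive; the two-argument form defaults both flags to inclusive. A Python sketch of that contract:

```python
# Sketch of a range validator in the spirit of ParamValidators.inRange:
# returns a predicate checking lower <= value <= upper, where each bound
# can independently be made exclusive.  Both bounds default to inclusive,
# matching the [lowerBound, upperBound] convenience overload.
def in_range(lower, upper, lower_inclusive=True, upper_inclusive=True):
    def check(value):
        lower_ok = value >= lower if lower_inclusive else value > lower
        upper_ok = value <= upper if upper_inclusive else value < upper
        return lower_ok and upper_ok
    return check

is_probability = in_range(0.0, 1.0)
print(is_probability(0.0), is_probability(1.0))          # True True
print(in_range(0.0, 1.0, False, False)(0.0))             # False
```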
insert(Dataset<Row>, boolean) - Method in interface org.apache.spark.sql.sources.InsertableRelation

InsertableRelation - Interface in org.apache.spark.sql.sources
A BaseRelation that can be used to insert data into it through the insert method.
insertInto(String) - Method in class org.apache.spark.sql.DataFrameWriter
Inserts the content of the DataFrame to the specified table.
InsertIntoHiveDirCommand - Class in org.apache.spark.sql.hive.execution
Command for writing the results of a query to the file system.
InsertIntoHiveDirCommand(boolean, CatalogStorageFormat, LogicalPlan, boolean, Seq<String>) - Constructor for class org.apache.spark.sql.hive.execution.InsertIntoHiveDirCommand

InsertIntoHiveTable - Class in org.apache.spark.sql.hive.execution
Command for writing data out to a Hive table.
InsertIntoHiveTable(CatalogTable, Map<String, Option<String>>, LogicalPlan, boolean, boolean, Seq<String>) - Constructor for class org.apache.spark.sql.hive.execution.InsertIntoHiveTable

inShutdown() - Static method in class org.apache.spark.util.ShutdownHookManager
Detect whether this thread might be executing a shutdown hook.
inspectorToDataType(ObjectInspector) - Method in interface org.apache.spark.sql.hive.HiveInspectors

inspectorToDataType(ObjectInspector) - Static method in class org.apache.spark.sql.hive.orc.OrcFileFormat

instance() - Method in class org.apache.spark.ml.LoadInstanceEnd

instance() - Static method in class org.apache.spark.mllib.tree.impurity.Entropy
Get this impurity instance.
instance() - Static method in class org.apache.spark.mllib.tree.impurity.Gini
Get this impurity instance.
instance() - Static method in class org.apache.spark.mllib.tree.impurity.Variance
Get this impurity instance.
INSTANCE - Static variable in class org.apache.spark.serializer.DummySerializerInstance

INSTANT() - Static method in class org.apache.spark.sql.Encoders
Creates an encoder that serializes instances of the java.time.Instant class to the internal representation of nullable Catalyst's TimestampType.
instantiate(String, String, String, boolean) - Static method in class org.apache.spark.internal.io.FileCommitProtocol
Instantiates a FileCommitProtocol using the given className.
instr(Column, String) - Static method in class org.apache.spark.sql.functions
Locate the position of the first occurrence of substr column in the given string.
INT() - Static method in class org.apache.spark.sql.Encoders
An encoder for nullable int type.
IntArrayParam - Class in org.apache.spark.ml.param
:: DeveloperApi :: Specialized version of Param[Array[Int]] for Java.
IntArrayParam(Params, String, String, Function1<int[], Object>) - Constructor for class org.apache.spark.ml.param.IntArrayParam

IntArrayParam(Params, String, String) - Constructor for class org.apache.spark.ml.param.IntArrayParam

IntegerExactNumeric - Class in org.apache.spark.sql.types

IntegerExactNumeric() - Constructor for class org.apache.spark.sql.types.IntegerExactNumeric

IntegerType - Static variable in class org.apache.spark.sql.types.DataTypes
Gets the IntegerType object.
IntegerType - Class in org.apache.spark.sql.types
The data type representing Int values.
IntegerType() - Constructor for class org.apache.spark.sql.types.IntegerType

INTER_JOB_WAIT_MS() - Static method in class org.apache.spark.ui.UIWorkloadGenerator

interact(Term) - Static method in class org.apache.spark.ml.feature.Dot

interact(Term) - Static method in class org.apache.spark.ml.feature.EmptyTerm

interact(Term) - Method in interface org.apache.spark.ml.feature.InteractableTerm
Interactions of interactable terms.
interact(Term) - Method in interface org.apache.spark.ml.feature.Term
Default interactions of a Term
InteractableTerm - Interface in org.apache.spark.ml.feature
A term that may be part of an interaction, e.g.
Interaction - Class in org.apache.spark.ml.feature
Implements the feature interaction transform.
Interaction(String) - Constructor for class org.apache.spark.ml.feature.Interaction

Interaction() - Constructor for class org.apache.spark.ml.feature.Interaction

intercept() - Method in class org.apache.spark.ml.classification.LinearSVCModel

intercept() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
The model intercept for "binomial" logistic regression.
intercept() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel

intercept() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel

intercept() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
intercept() - Method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data

intercept() - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel

intercept() - Method in class org.apache.spark.mllib.classification.SVMModel

intercept() - Method in class org.apache.spark.mllib.regression.GeneralizedLinearModel

intercept() - Method in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$.Data

intercept() - Method in class org.apache.spark.mllib.regression.LassoModel

intercept() - Method in class org.apache.spark.mllib.regression.LinearRegressionModel

intercept() - Method in class org.apache.spark.mllib.regression.RidgeRegressionModel

interceptVector() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel

intermediateStorageLevel() - Method in class org.apache.spark.ml.recommendation.ALS

intermediateStorageLevel() - Method in interface org.apache.spark.ml.recommendation.ALSParams
Param for StorageLevel for intermediate datasets.
InternalAccumulator - Class in org.apache.spark
A collection of fields and methods concerned with internal accumulators that represent task level metrics.
InternalAccumulator() - Constructor for class org.apache.spark.InternalAccumulator

InternalAccumulator.input$ - Class in org.apache.spark

InternalAccumulator.output$ - Class in org.apache.spark

InternalAccumulator.shuffleRead$ - Class in org.apache.spark

InternalAccumulator.shuffleWrite$ - Class in org.apache.spark

InternalKMeansModelWriter - Class in org.apache.spark.ml.clustering
A writer for KMeans that handles the "internal" (or default) format
InternalKMeansModelWriter() - Constructor for class org.apache.spark.ml.clustering.InternalKMeansModelWriter

InternalLinearRegressionModelWriter - Class in org.apache.spark.ml.regression
A writer for LinearRegression that handles the "internal" (or default) format
InternalLinearRegressionModelWriter() - Constructor for class org.apache.spark.ml.regression.InternalLinearRegressionModelWriter

InternalNode - Class in org.apache.spark.ml.tree
Internal Decision Tree node.
InterruptibleIterator<T> - Class in org.apache.spark
:: DeveloperApi :: An iterator that wraps around an existing iterator to provide task killing functionality.
InterruptibleIterator(TaskContext, Iterator<T>) - Constructor for class org.apache.spark.InterruptibleIterator

interruptThread() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask

interruptThread() - Method in class org.apache.spark.scheduler.local.KillTask

intersect(Dataset<T>) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset containing rows only in both this Dataset and another Dataset.
intersectAll(Dataset<T>) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset containing rows only in both this Dataset and another Dataset while preserving the duplicates.
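`intersect` deduplicates its result, while `intersectAll` keeps duplicates, mirroring SQL's INTERSECT vs INTERSECT ALL. The difference can be sketched with plain Python collections (a conceptual illustration of the semantics, not Spark code):

```python
from collections import Counter

def intersect(left, right):
    # INTERSECT semantics: distinct rows present in both inputs.
    return sorted(set(left) & set(right))

def intersect_all(left, right):
    # INTERSECT ALL semantics: each row appears
    # min(count in left, count in right) times.
    counts = Counter(left) & Counter(right)
    return sorted(counts.elements())
```

For inputs `[1, 1, 2, 3]` and `[1, 1, 3, 3]`, `intersect` yields `[1, 3]` while `intersect_all` yields `[1, 1, 3]`.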
intersection(JavaDoubleRDD) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return the intersection of this RDD and another one.
intersection(JavaPairRDD<K, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
Return the intersection of this RDD and another one.
intersection(JavaRDD<T>) - Method in class org.apache.spark.api.java.JavaRDD
Return the intersection of this RDD and another one.
intersection(RDD<T>) - Method in class org.apache.spark.rdd.RDD
Return the intersection of this RDD and another one.
intersection(RDD<T>, Partitioner, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
Return the intersection of this RDD and another one.
intersection(RDD<T>, int) - Method in class org.apache.spark.rdd.RDD
Return the intersection of this RDD and another one.
IntParam - Class in org.apache.spark.ml.param
:: DeveloperApi :: Specialized version of Param[Int] for Java.
IntParam(String, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.IntParam

IntParam(String, String, String) - Constructor for class org.apache.spark.ml.param.IntParam

IntParam(Identifiable, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.IntParam

IntParam(Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.IntParam

IntParam - Class in org.apache.spark.util
An extractor object for parsing strings into integers.
IntParam() - Constructor for class org.apache.spark.util.IntParam

invalidateSerializedMapOutputStatusCache() - Method in class org.apache.spark.ShuffleStatus
Clears the cached serialized map output statuses.
invalidateTable(Identifier) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension

invalidateTable(Identifier) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
Invalidate cached table metadata for an identifier.
inverse() - Method in class org.apache.spark.ml.feature.DCT
Indicates whether to perform the inverse DCT (true) or forward DCT (false).
inverse(double[], int) - Static method in class org.apache.spark.mllib.linalg.CholeskyDecomposition
Computes the inverse of a real symmetric positive definite matrix A using the Cholesky factorization A = U**T*U.
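Spark's `CholeskyDecomposition.inverse` delegates to LAPACK (which factors A = U**T*U with an upper-triangular U). The idea can be sketched in pure Python using the equivalent lower-triangular factorization A = L·Lᵀ and two triangular solves per column (a didactic sketch, not the LAPACK-backed implementation):

```python
def cholesky(a):
    """Lower-triangular L with A = L @ L.T, for symmetric positive definite A."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (a[i][i] - s) ** 0.5
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def spd_inverse(a):
    """Invert A column by column: solve L y = e_col, then L.T x = y."""
    n = len(a)
    L = cholesky(a)
    inv = [[0.0] * n for _ in range(n)]
    for col in range(n):
        y = [0.0] * n
        for i in range(n):  # forward substitution
            e = 1.0 if i == col else 0.0
            y[i] = (e - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
        x = [0.0] * n
        for i in reversed(range(n)):  # back substitution with L.T
            x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
        for i in range(n):
            inv[i][col] = x[i]
    return inv
```

For A = [[4, 2], [2, 3]] this gives the expected inverse [[0.375, -0.25], [-0.25, 0.5]]; the Cholesky route is roughly twice as fast as LU for SPD matrices and never needs pivoting.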
Inverse$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse$

invokedMethod(Object, Class<?>, String) - Static method in class org.apache.spark.graphx.util.BytecodeUtils
Test whether the given closure invokes the specified method in the specified class.
invokeWriteReplace(Object) - Method in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods

ioEncryptionKey() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig

ioschema() - Method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec

is32BitDecimalType(DataType) - Static method in class org.apache.spark.sql.types.DecimalType
Returns whether dt is a DecimalType that fits inside an int
is64BitDecimalType(DataType) - Static method in class org.apache.spark.sql.types.DecimalType
Returns whether dt is a DecimalType that fits inside a long
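The "fits inside an int/long" tests reduce to a precision threshold on the unscaled decimal value: nine decimal digits always fit in a 32-bit int (since 2³¹−1 has ten digits) and eighteen always fit in a 64-bit long (2⁶³−1 has nineteen). A sketch of that reasoning (the constant names are illustrative, not Spark's internals):

```python
MAX_INT_DIGITS = 9    # 10**9 - 1 = 999_999_999 < 2**31 - 1
MAX_LONG_DIGITS = 18  # 10**18 - 1 < 2**63 - 1

def fits_in_int(precision):
    # A DecimalType's unscaled value has at most `precision` digits.
    return precision <= MAX_INT_DIGITS

def fits_in_long(precision):
    return precision <= MAX_LONG_DIGITS
```

This is why vectorized readers can store small decimals in primitive int/long columns instead of boxed BigDecimal values.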
IS_TESTING() - Static method in class org.apache.spark.internal.config.Tests

isActive() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
Returns true if this query is actively running.
isActive() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

isActive() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo

isActive() - Method in class org.apache.spark.status.LiveExecutor

isAddIntercept() - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
Get whether the algorithm uses addIntercept
isAllowed(Enumeration.Value, Enumeration.Value) - Static method in class org.apache.spark.scheduler.TaskLocality

isBarrier() - Method in class org.apache.spark.storage.RDDInfo

isBatchingEnabled(SparkConf, boolean) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils

isBindCollision(Throwable) - Static method in class org.apache.spark.util.Utils
Return whether the exception is caused by an address-port collision when binding.
isBlacklisted() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

isBlacklisted() - Method in class org.apache.spark.status.LiveExecutor

isBlacklisted() - Method in class org.apache.spark.status.LiveExecutorStageSummary

isBlacklistedForStage() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary

isBroadcast() - Method in class org.apache.spark.storage.BlockId

isBucket() - Method in class org.apache.spark.sql.catalog.Column

isByteArrayDecimalType(DataType) - Static method in class org.apache.spark.sql.types.DecimalType
Returns whether dt is a DecimalType that doesn't fit inside a long
isCached(String) - Method in class org.apache.spark.sql.catalog.Catalog
Returns true if the table is currently cached in-memory.
isCached(String) - Method in class org.apache.spark.sql.SQLContext
Returns true if the table is currently cached in-memory.
isCached() - Method in class org.apache.spark.storage.BlockStatus

isCached() - Method in class org.apache.spark.storage.RDDInfo

isCancelled() - Method in class org.apache.spark.ComplexFutureAction

isCancelled() - Method in interface org.apache.spark.FutureAction
Returns whether the action has been cancelled.
isCancelled() - Method in class org.apache.spark.SimpleFutureAction

isCascadingTruncateTable() - Method in class org.apache.spark.sql.jdbc.AggregatedDialect

isCascadingTruncateTable() - Static method in class org.apache.spark.sql.jdbc.DB2Dialect

isCascadingTruncateTable() - Static method in class org.apache.spark.sql.jdbc.DerbyDialect

isCascadingTruncateTable() - Method in class org.apache.spark.sql.jdbc.JdbcDialect
Return Some[true] iff TRUNCATE TABLE causes a cascading truncate by default.
isCascadingTruncateTable() - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect

isCascadingTruncateTable() - Static method in class org.apache.spark.sql.jdbc.MySQLDialect

isCascadingTruncateTable() - Static method in class org.apache.spark.sql.jdbc.NoopDialect

isCascadingTruncateTable() - Static method in class org.apache.spark.sql.jdbc.OracleDialect

isCascadingTruncateTable() - Static method in class org.apache.spark.sql.jdbc.PostgresDialect

isCascadingTruncateTable() - Static method in class org.apache.spark.sql.jdbc.TeradataDialect

isCheckpointed() - Method in interface org.apache.spark.api.java.JavaRDDLike
Return whether this RDD has been checkpointed or not
isCheckpointed() - Method in class org.apache.spark.graphx.Graph
Return whether this Graph has been checkpointed or not.
isCheckpointed() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl

isCheckpointed() - Method in class org.apache.spark.graphx.impl.GraphImpl

isCheckpointed() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl

isCheckpointed() - Method in class org.apache.spark.rdd.RDD
Return whether this RDD is checkpointed and materialized, either reliably or locally.
isClientMode(SparkConf) - Static method in class org.apache.spark.util.Utils

isCliSessionState() - Static method in class org.apache.spark.sql.hive.HiveUtils
Check the current thread's SessionState type.
isColMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
Indicates whether the values backing this matrix are arranged in column major order.
isCompatible(BloomFilter) - Method in class org.apache.spark.util.sketch.BloomFilter
Determines whether a given bloom filter is compatible with this bloom filter.
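Two bloom filters can only be merged (bitwise OR of their bit arrays) when they have the same bit size and number of hash functions; otherwise an element's set bits land at different positions in each filter. A conceptual sketch of why the compatibility check exists (field and class names are illustrative, not Spark's internals):

```python
class SimpleBloomFilter:
    def __init__(self, num_bits, num_hashes):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # bit array packed into one big int

    def put(self, item):
        for seed in range(self.num_hashes):
            self.bits |= 1 << (hash((seed, item)) % self.num_bits)

    def might_contain(self, item):
        # May return false positives, never false negatives.
        return all((self.bits >> (hash((seed, item)) % self.num_bits)) & 1
                   for seed in range(self.num_hashes))

    def is_compatible(self, other):
        # Merging only makes sense when both filters hash into
        # identically shaped bit arrays with the same hash family.
        return (self.num_bits == other.num_bits
                and self.num_hashes == other.num_hashes)

    def merge_in_place(self, other):
        assert self.is_compatible(other)
        self.bits |= other.bits
```

Spark's `BloomFilter.mergeInPlace` performs the analogous check and throws if the filters were built with different expected-item or bit-size parameters.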
isCompleted() - Method in class org.apache.spark.BarrierTaskContext

isCompleted() - Method in class org.apache.spark.ComplexFutureAction

isCompleted() - Method in interface org.apache.spark.FutureAction
Returns whether the action has already been completed with a value or an exception.
isCompleted() - Method in class org.apache.spark.SimpleFutureAction

isCompleted() - Method in class org.apache.spark.TaskContext
Returns true if the task has completed.
isConnectorUsingCurrentToken(Map<String, Object>, Option<KafkaTokenClusterConf>) - Static method in class org.apache.spark.kafka010.KafkaTokenUtil

isDataAvailable() - Method in class org.apache.spark.sql.streaming.StreamingQueryStatus

isDefined(Param<?>) - Method in interface org.apache.spark.ml.param.Params
Checks whether a param is explicitly set or has a default value.
isDistributed() - Method in class org.apache.spark.ml.clustering.DistributedLDAModel

isDistributed() - Method in class org.apache.spark.ml.clustering.LDAModel
Indicates whether this instance is of type DistributedLDAModel
isDistributed() - Method in class org.apache.spark.ml.clustering.LocalLDAModel

isDriver() - Method in class org.apache.spark.storage.BlockManagerId

isDynamicAllocationEnabled(SparkConf) - Static method in class org.apache.spark.util.Utils
Return whether dynamic allocation is enabled in the given conf.
isEmpty() - Method in interface org.apache.spark.api.java.JavaRDDLike

isEmpty() - Method in class org.apache.spark.rdd.RDD

isEmpty() - Method in class org.apache.spark.sql.Dataset
Returns true if the Dataset is empty.
isEmpty() - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap

isEncryptionEnabled(JavaSparkContext) - Static method in class org.apache.spark.api.r.RUtils

isExecutorActive(String) - Method in interface org.apache.spark.ExecutorAllocationClient
Whether an executor is active.
IsExecutorAlive(String) - Constructor for class org.apache.spark.storage.BlockManagerMessages.IsExecutorAlive

IsExecutorAlive$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.IsExecutorAlive$

isExecutorStartupConf(String) - Static method in class org.apache.spark.SparkConf
Return whether the given config should be passed to an executor on start-up.
isExperiment() - Method in class org.apache.spark.mllib.stat.test.BinarySample

isFailed(Enumeration.Value) - Static method in class org.apache.spark.TaskState

isFatalError(Throwable) - Static method in class org.apache.spark.util.Utils
Returns true if the given exception was fatal.
isFile(Path) - Static method in class org.apache.spark.ml.image.SamplePathFilter

isFileSplittable(Path, CompressionCodecFactory) - Static method in class org.apache.spark.util.Utils
Check whether the file of the path is splittable.
isFinal() - Method in enum org.apache.spark.launcher.SparkAppHandle.State
Whether this state is a final state, meaning the application is not running anymore once it's reached.
isFinished(Enumeration.Value) - Static method in class org.apache.spark.TaskState

isGlobalJaasConfigurationProvided() - Static method in class org.apache.spark.kafka010.KafkaTokenUtil

isHive23() - Static method in class org.apache.spark.sql.hive.HiveUtils

isIgnorableException(Throwable) - Method in interface org.apache.spark.util.ListenerBus
Allows bus implementations to prevent error logging for certain exceptions.
isin(Object...) - Method in class org.apache.spark.sql.Column
A boolean expression that is evaluated to true if the value of this expression is contained by the evaluated values of the arguments.
isin(Seq<Object>) - Method in class org.apache.spark.sql.Column
A boolean expression that is evaluated to true if the value of this expression is contained by the evaluated values of the arguments.
isInCollection(Iterable<?>) - Method in class org.apache.spark.sql.Column
A boolean expression that is evaluated to true if the value of this expression is contained by the provided collection.
isInCollection(Iterable<?>) - Method in class org.apache.spark.sql.Column
A boolean expression that is evaluated to true if the value of this expression is contained by the provided collection.
isInDirectory(File, File) - Static method in class org.apache.spark.util.Utils
Return whether the specified file is a parent directory of the child file.
isInitialValueFinal() - Method in class org.apache.spark.partial.PartialResult

isInterrupted() - Method in class org.apache.spark.BarrierTaskContext

isInterrupted() - Method in class org.apache.spark.TaskContext
Returns true if the task has been killed.
isLargerBetter() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator

isLargerBetter() - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator

isLargerBetter() - Method in class org.apache.spark.ml.evaluation.Evaluator
Indicates whether the metric returned by evaluate should be maximized (true, default) or minimized (false).
isLargerBetter() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

isLargerBetter() - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator

isLargerBetter() - Method in class org.apache.spark.ml.evaluation.RankingEvaluator

isLargerBetter() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator

isLeaf() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData

isLeaf() - Method in class org.apache.spark.mllib.tree.model.Node

isLeftChild(int) - Static method in class org.apache.spark.mllib.tree.model.Node
Returns true if this is a left child.
isLocal() - Method in class org.apache.spark.api.java.JavaSparkContext

isLocal - Variable in class org.apache.spark.ExecutorPluginContext

isLocal() - Method in class org.apache.spark.SparkContext

isLocal() - Method in class org.apache.spark.sql.Dataset
Returns true if the collect and take methods can be run locally (without any Spark executors).
isLocal() - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveDirCommand

isLocalMaster(SparkConf) - Static method in class org.apache.spark.util.Utils

isLocalUri(String) - Static method in class org.apache.spark.util.Utils
Returns whether the URI is a "local:" URI.
isMac() - Static method in class org.apache.spark.util.Utils
Whether the underlying operating system is Mac OS X.
isModifiable(String) - Method in class org.apache.spark.sql.RuntimeConfig
Indicates whether the configuration property with the given key is modifiable in the current session.
isMulticlassClassification() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

isMulticlassWithCategoricalFeatures() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

isMultipleOf(Duration) - Method in class org.apache.spark.streaming.Duration

isMultipleOf(Duration) - Method in class org.apache.spark.streaming.Time

isNaN() - Method in class org.apache.spark.sql.Column
True if the current expression is NaN.
isnan(Column) - Static method in class org.apache.spark.sql.functions
Return true iff the column is NaN.
isNominal() - Method in class org.apache.spark.ml.attribute.Attribute
Tests whether this attribute is nominal, true for NominalAttribute and BinaryAttribute.
isNominal() - Method in class org.apache.spark.ml.attribute.BinaryAttribute

isNominal() - Method in class org.apache.spark.ml.attribute.NominalAttribute

isNominal() - Method in class org.apache.spark.ml.attribute.NumericAttribute

isNominal() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute

isNotNull() - Method in class org.apache.spark.sql.Column
True if the current expression is NOT null.
IsNotNull - Class in org.apache.spark.sql.sources
A filter that evaluates to true iff the attribute evaluates to a non-null value.
IsNotNull(String) - Constructor for class org.apache.spark.sql.sources.IsNotNull

isNull() - Method in class org.apache.spark.sql.Column
True if the current expression is null.
isnull(Column) - Static method in class org.apache.spark.sql.functions
Return true iff the column is null.
IsNull - Class in org.apache.spark.sql.sources
A filter that evaluates to true iff the attribute evaluates to null.
IsNull(String) - Constructor for class org.apache.spark.sql.sources.IsNull

isNullable() - Method in class org.apache.spark.sql.connector.catalog.TableChange.AddColumn

isNullable() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnType

isNullAt(int) - Method in interface org.apache.spark.sql.Row
Checks whether the value at position i is null.
isNullAt(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector

isNullAt(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray

isNullAt(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow

isNullAt(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
Returns whether the value at rowId is NULL.
isNumeric() - Method in class org.apache.spark.ml.attribute.Attribute
Tests whether this attribute is numeric, true for NumericAttribute and BinaryAttribute.
isNumeric() - Method in class org.apache.spark.ml.attribute.BinaryAttribute

isNumeric() - Method in class org.apache.spark.ml.attribute.NominalAttribute

isNumeric() - Method in class org.apache.spark.ml.attribute.NumericAttribute

isNumeric() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute

IsolatedRpcEndpoint - Interface in org.apache.spark.rpc
An endpoint that uses a dedicated thread pool for delivering messages.
isOpen() - Method in class org.apache.spark.security.CryptoStreamUtils.ErrorHandlingReadableChannel

isOpen() - Method in class org.apache.spark.storage.CountingWritableChannel

isOrdinal() - Method in class org.apache.spark.ml.attribute.NominalAttribute

isotonic() - Method in class org.apache.spark.ml.regression.IsotonicRegression

isotonic() - Method in interface org.apache.spark.ml.regression.IsotonicRegressionBase
Param for whether the output sequence should be isotonic/increasing (true) or antitonic/decreasing (false).
isotonic() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel

isotonic() - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel

IsotonicRegression - Class in org.apache.spark.ml.regression
Isotonic regression.
IsotonicRegression(String) - Constructor for class org.apache.spark.ml.regression.IsotonicRegression

IsotonicRegression() - Constructor for class org.apache.spark.ml.regression.IsotonicRegression

IsotonicRegression - Class in org.apache.spark.mllib.regression
Isotonic regression.
IsotonicRegression() - Constructor for class org.apache.spark.mllib.regression.IsotonicRegression
Constructs IsotonicRegression instance with default parameter isotonic = true.
IsotonicRegressionBase - Interface in org.apache.spark.ml.regression
Params for isotonic regression.
IsotonicRegressionModel - Class in org.apache.spark.ml.regression
Model fitted by IsotonicRegression.
IsotonicRegressionModel - Class in org.apache.spark.mllib.regression
Regression model for isotonic regression.
IsotonicRegressionModel(double[], double[], boolean) - Constructor for class org.apache.spark.mllib.regression.IsotonicRegressionModel

IsotonicRegressionModel(Iterable<Object>, Iterable<Object>, Boolean) - Constructor for class org.apache.spark.mllib.regression.IsotonicRegressionModel
A Java-friendly constructor that takes two Iterable parameters and one Boolean parameter.
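Isotonic regression fits a non-decreasing step function minimizing squared error; the classic sequential algorithm is pool-adjacent-violators (PAV). A compact unweighted sketch of PAV (conceptual; Spark's implementation is a distributed, weighted variant of this idea):

```python
def pav(y):
    """Pool-adjacent-violators: least-squares non-decreasing fit to y."""
    # Each block holds [sum, count]; adjacent blocks whose means
    # violate monotonicity are merged and replaced by their pooled mean.
    blocks = []
    for value in y:
        blocks.append([value, 1])
        # Merge while the previous block's mean exceeds the new one's
        # (compare sums cross-multiplied to avoid division).
        while len(blocks) > 1 and (blocks[-2][0] * blocks[-1][1]
                                   > blocks[-1][0] * blocks[-2][1]):
            total, count = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += count
    fitted = []
    for total, count in blocks:
        fitted.extend([total / count] * count)
    return fitted
```

For example, `pav([1, 3, 2])` pools the violating pair (3, 2) into their mean, giving `[1.0, 2.5, 2.5]`; an already sorted input is returned unchanged. Setting `isotonic = false` in Spark corresponds to running the same procedure on the negated sequence.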
isOutputSpecValidationEnabled(SparkConf) - Static method in class org.apache.spark.internal.io.SparkHadoopWriterUtils

isPartition() - Method in class org.apache.spark.sql.catalog.Column

isPresent() - Method in class org.apache.spark.api.java.Optional

isProcessRunning(int) - Static method in class org.apache.spark.util.Utils
Given a process id, return true if the process is still running.
isRDD() - Method in class org.apache.spark.storage.BlockId

isReady() - Method in interface org.apache.spark.scheduler.SchedulerBackend

isRegistered() - Method in class org.apache.spark.util.AccumulatorV2
Returns true if this accumulator has been registered.
isRInstalled() - Static method in class org.apache.spark.api.r.RUtils
Check if R is installed before running tests that use R commands.
isRowMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
Indicates whether the values backing this matrix are arranged in row major order.
isSessionCatalog(CatalogPlugin) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util

isSet(Param<?>) - Method in interface org.apache.spark.ml.param.Params
Checks whether a param is explicitly set.
isShuffle() - Method in class org.apache.spark.storage.BlockId

isSparkPortConf(String) - Static method in class org.apache.spark.SparkConf
Return true if the given config matches either spark.*.port or spark.port.*.
isSparkRInstalled() - Static method in class org.apache.spark.api.r.RUtils
Check if SparkR is installed before running tests that use SparkR.
isSplitable(SparkSession, Map<String, String>, Path) - Method in class org.apache.spark.sql.hive.orc.OrcFileFormat

isStarted() - Method in class org.apache.spark.streaming.receiver.Receiver
Check if the receiver has started or not.
isStopped() - Method in class org.apache.spark.SparkContext

isStopped() - Method in class org.apache.spark.streaming.receiver.Receiver
Check if the receiver has been marked for stopping.
isStreaming() - Method in class org.apache.spark.sql.Dataset
Returns true if this Dataset contains one or more sources that continuously return data as it arrives.
isStreamingDynamicAllocationEnabled(SparkConf) - Static method in class org.apache.spark.util.Utils

isSubClassOf(Type, Class<?>) - Method in interface org.apache.spark.sql.hive.HiveInspectors

isSubDir(Path, Path, FileSystem) - Method in interface org.apache.spark.sql.hive.execution.SaveAsHiveFile

isTemporary() - Method in class org.apache.spark.sql.catalog.Function

isTemporary() - Method in class org.apache.spark.sql.catalog.Table

isTesting() - Static method in class org.apache.spark.util.Utils
Indicates whether Spark is currently running unit tests.
isTimingOut() - Method in class org.apache.spark.streaming.State
Whether the state is timing out and going to be removed by the system after the current batch.
isTraceEnabled() - Method in interface org.apache.spark.internal.Logging

isTransposed() - Method in class org.apache.spark.ml.linalg.DenseMatrix

isTransposed() - Method in interface org.apache.spark.ml.linalg.Matrix
Flag that keeps track whether the matrix is transposed or not.
isTransposed() - Method in class org.apache.spark.ml.linalg.SparseMatrix

isTransposed() - Method in class org.apache.spark.mllib.linalg.DenseMatrix

isTransposed() - Method in interface org.apache.spark.mllib.linalg.Matrix
Flag that keeps track whether the matrix is transposed or not.
isTransposed() - Method in class org.apache.spark.mllib.linalg.SparseMatrix

isTriggerActive() - Method in class org.apache.spark.sql.streaming.StreamingQueryStatus

isValid() - Method in class org.apache.spark.ml.param.Param

isValid() - Method in class org.apache.spark.storage.StorageLevel

isWindows() - Static method in class org.apache.spark.util.Utils
Whether the underlying operating system is Windows.
isZero() - Method in class org.apache.spark.sql.types.Decimal

isZero() - Method in class org.apache.spark.streaming.Duration

isZero() - Method in class org.apache.spark.util.AccumulatorV2
Returns whether this accumulator has its zero value, e.g. for a counter accumulator, 0 is the zero value; for a list accumulator, Nil is the zero value.
isZero() - Method in class org.apache.spark.util.CollectionAccumulator
Returns false if this accumulator instance has any values in it.
isZero() - Method in class org.apache.spark.util.DoubleAccumulator
Returns false if this accumulator has had any values added to it or the sum is non-zero.
isZero() - Method in class org.apache.spark.util.LongAccumulator
Returns false if this accumulator has had any values added to it or the sum is non-zero.
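The `isZero` contract is that a freshly created (or reset) accumulator reports its identity value; note that for the numeric accumulators above, both the element count and the sum must be at identity, so an accumulator that received +5 and -5 is not "zero". A minimal sketch of a LongAccumulator-like class (illustrative, not Spark's code):

```python
class LongAcc:
    def __init__(self):
        self.count = 0
        self.total = 0

    def add(self, v):
        self.count += 1
        self.total += v

    def is_zero(self):
        # Zero means "nothing ever added": both the element count and the
        # running sum must be at their identity values, so values that
        # cancel out to a zero sum still make the accumulator non-zero.
        return self.count == 0 and self.total == 0

    def merge(self, other):
        self.count += other.count
        self.total += other.total
```

Spark relies on this contract when deciding whether a task's accumulator updates need to be sent back to the driver.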
item() - Method in class org.apache.spark.ml.recommendation.ALS.Rating

itemCol() - Method in class org.apache.spark.ml.recommendation.ALS

itemCol() - Method in class org.apache.spark.ml.recommendation.ALSModel

itemCol() - Method in interface org.apache.spark.ml.recommendation.ALSModelParams
Param for the column name for item ids.
itemFactors() - Method in class org.apache.spark.ml.recommendation.ALSModel

items() - Method in class org.apache.spark.mllib.fpm.FPGrowth.FreqItemset

itemsCol() - Method in class org.apache.spark.ml.fpm.FPGrowth

itemsCol() - Method in class org.apache.spark.ml.fpm.FPGrowthModel

itemsCol() - Method in interface org.apache.spark.ml.fpm.FPGrowthParams
Items column name.
itemSupport() - Method in class org.apache.spark.mllib.fpm.FPGrowthModel

iterator(Partition, TaskContext) - Method in interface org.apache.spark.api.java.JavaRDDLike
Internal method to this RDD; will read from cache if applicable, or otherwise compute it.
iterator(Partition, TaskContext) - Method in class org.apache.spark.rdd.RDD
Internal method to this RDD; will read from cache if applicable, or otherwise compute it.
iterator() - Method in class org.apache.spark.sql.types.StructType

iterator() - Method in class org.apache.spark.status.RDDPartitionSeq

IV_LENGTH_IN_BYTES() - Static method in class org.apache.spark.security.CryptoStreamUtils

J

j() - Method in class org.apache.spark.mllib.linalg.distributed.MatrixEntry

jarOfClass(Class<?>) - Static method in class org.apache.spark.api.java.JavaSparkContext
Find the JAR from which a given class was loaded, to make it easy for users to pass their JARs to SparkContext.
jarOfClass(Class<?>) - Static method in class org.apache.spark.SparkContext
Find the JAR from which a given class was loaded, to make it easy for users to pass their JARs to SparkContext.
jarOfClass(Class<?>) - Static method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Find the JAR from which a given class was loaded, to make it easy for users to pass their JARs to StreamingContext.
jarOfClass(Class<?>) - Static method in class org.apache.spark.streaming.StreamingContext
Find the JAR from which a given class was loaded, to make it easy for users to pass their JARs to StreamingContext.
jarOfObject(Object) - Static method in class org.apache.spark.api.java.JavaSparkContext
Find the JAR that contains the class of a particular object, to make it easy for users to pass their JARs to SparkContext.
jarOfObject(Object) - Static method in class org.apache.spark.SparkContext
Find the JAR that contains the class of a particular object, to make it easy for users to pass their JARs to SparkContext.
jars() - Method in class org.apache.spark.api.java.JavaSparkContext

jars() - Method in class org.apache.spark.SparkContext

javaAntecedent() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
Returns antecedent in a Java List.
javaCategoryMaps() - Method in class org.apache.spark.ml.feature.VectorIndexerModel
Java-friendly version of categoryMaps
javaConsequent() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
Returns consequent in a Java List.
JavaDoubleRDD - Class in org.apache.spark.api.java

JavaDoubleRDD(RDD<Object>) - Constructor for class org.apache.spark.api.java.JavaDoubleRDD

JavaDStream<T> - Class in org.apache.spark.streaming.api.java
A Java-friendly interface to DStream, the basic abstraction in Spark Streaming that represents a continuous stream of data.
JavaDStream(DStream<T>, ClassTag<T>) - Constructor for class org.apache.spark.streaming.api.java.JavaDStream

JavaDStreamLike<T,This extends JavaDStreamLike<T,This,R>,R extends JavaRDDLike<T,R>> - Interface in org.apache.spark.streaming.api.java

JavaFutureAction<T> - Interface in org.apache.spark.api.java

JavaHadoopRDD<K,V> - Class in org.apache.spark.api.java

JavaHadoopRDD(HadoopRDD<K, V>, ClassTag<K>, ClassTag<V>) - Constructor for class org.apache.spark.api.java.JavaHadoopRDD

javaHome() - Method in class org.apache.spark.status.api.v1.RuntimeInfo

JavaInputDStream<T> - Class in org.apache.spark.streaming.api.java
A Java-friendly interface to InputDStream.
JavaInputDStream(InputDStream<T>, ClassTag<T>) - Constructor for class org.apache.spark.streaming.api.java.JavaInputDStream

javaItems() - Method in class org.apache.spark.mllib.fpm.FPGrowth.FreqItemset
Returns items in a Java List.
JavaIterableWrapperSerializer - Class in org.apache.spark.serializer
A Kryo serializer for serializing results returned by asJavaIterable.
JavaIterableWrapperSerializer() - Constructor for class org.apache.spark.serializer.JavaIterableWrapperSerializer

JavaMapWithStateDStream<KeyType,ValueType,StateType,MappedType> - org.apache.spark.streaming.api.java中的类
DStream representing the stream of data generated by mapWithState operation on a JavaPairDStream.
JavaNewHadoopRDD<K,V> - org.apache.spark.api.java中的类
 
JavaNewHadoopRDD(NewHadoopRDD<K, V>, ClassTag<K>, ClassTag<V>) - 类 的构造器org.apache.spark.api.java.JavaNewHadoopRDD
 
javaOcvTypes() - 类 中的静态方法org.apache.spark.ml.image.ImageSchema
(Java-specific) OpenCV type mapping supported
JavaPackage - org.apache.spark.mllib中的类
A dummy class as a workaround to show the package doc of spark.mllib in generated Java API docs.
JavaPairDStream<K,V> - org.apache.spark.streaming.api.java中的类
A Java-friendly interface to a DStream of key-value pairs, which provides extra methods like reduceByKey and join.
JavaPairDStream(DStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - 类 的构造器org.apache.spark.streaming.api.java.JavaPairDStream
 
JavaPairInputDStream<K,V> - org.apache.spark.streaming.api.java中的类
A Java-friendly interface to InputDStream of key-value pairs.
JavaPairInputDStream(InputDStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - 类 的构造器org.apache.spark.streaming.api.java.JavaPairInputDStream
 
JavaPairRDD<K,V> - org.apache.spark.api.java中的类
 
JavaPairRDD(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - 类 的构造器org.apache.spark.api.java.JavaPairRDD
 
JavaPairReceiverInputDStream<K,V> - org.apache.spark.streaming.api.java中的类
A Java-friendly interface to ReceiverInputDStream, the abstract class for defining any input stream that receives data over the network.
JavaPairReceiverInputDStream(ReceiverInputDStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - 类 的构造器org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
 
JavaParams - org.apache.spark.ml.param中的类
:: DeveloperApi :: Java-friendly wrapper for Params.
JavaParams() - 类 的构造器org.apache.spark.ml.param.JavaParams
 
JavaRDD<T> - Class in org.apache.spark.api.java

JavaRDD(RDD<T>, ClassTag<T>) - Constructor for class org.apache.spark.api.java.JavaRDD

javaRDD() - Method in class org.apache.spark.sql.Dataset
Returns the content of the Dataset as a JavaRDD of Ts.
JavaRDDLike<T,This extends JavaRDDLike<T,This>> - Interface in org.apache.spark.api.java
Defines operations common to several Java RDD implementations.
JavaReceiverInputDStream<T> - Class in org.apache.spark.streaming.api.java
A Java-friendly interface to ReceiverInputDStream, the abstract class for defining any input stream that receives data over the network.
JavaReceiverInputDStream(ReceiverInputDStream<T>, ClassTag<T>) - Constructor for class org.apache.spark.streaming.api.java.JavaReceiverInputDStream

javaSequence() - Method in class org.apache.spark.mllib.fpm.PrefixSpan.FreqSequence
Returns the sequence as a Java List of lists for Java users.
javaSerialization(ClassTag<T>) - Static method in class org.apache.spark.sql.Encoders
(Scala-specific) Creates an encoder that serializes objects of type T using generic Java serialization.
javaSerialization(Class<T>) - Static method in class org.apache.spark.sql.Encoders
Creates an encoder that serializes objects of type T using generic Java serialization.
JavaSerializer - Class in org.apache.spark.serializer
:: DeveloperApi :: A Spark serializer that uses Java's built-in serialization.
JavaSerializer(SparkConf) - Constructor for class org.apache.spark.serializer.JavaSerializer

JavaSparkContext - Class in org.apache.spark.api.java
A Java-friendly version of SparkContext that returns JavaRDDs and works with Java collections instead of Scala ones.
JavaSparkContext(SparkContext) - Constructor for class org.apache.spark.api.java.JavaSparkContext

JavaSparkContext() - Constructor for class org.apache.spark.api.java.JavaSparkContext
Create a JavaSparkContext that loads settings from system properties (for instance, when launching with .
JavaSparkContext(SparkConf) - Constructor for class org.apache.spark.api.java.JavaSparkContext

JavaSparkContext(String, String) - Constructor for class org.apache.spark.api.java.JavaSparkContext

JavaSparkContext(String, String, SparkConf) - Constructor for class org.apache.spark.api.java.JavaSparkContext

JavaSparkContext(String, String, String, String) - Constructor for class org.apache.spark.api.java.JavaSparkContext

JavaSparkContext(String, String, String, String[]) - Constructor for class org.apache.spark.api.java.JavaSparkContext

JavaSparkContext(String, String, String, String[], Map<String, String>) - Constructor for class org.apache.spark.api.java.JavaSparkContext

JavaSparkStatusTracker - Class in org.apache.spark.api.java
Low-level status reporting APIs for monitoring job and stage progress.
JavaStreamingContext - Class in org.apache.spark.streaming.api.java
A Java-friendly version of StreamingContext, the main entry point for Spark Streaming functionality.
JavaStreamingContext(StreamingContext) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext

JavaStreamingContext(String, String, Duration) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
Create a StreamingContext.
JavaStreamingContext(String, String, Duration, String, String) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
Create a StreamingContext.
JavaStreamingContext(String, String, Duration, String, String[]) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
Create a StreamingContext.
JavaStreamingContext(String, String, Duration, String, String[], Map<String, String>) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
Create a StreamingContext.
JavaStreamingContext(JavaSparkContext, Duration) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
Create a JavaStreamingContext using an existing JavaSparkContext.
JavaStreamingContext(SparkConf, Duration) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
Create a JavaStreamingContext using a SparkConf configuration.
JavaStreamingContext(String) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
Recreate a JavaStreamingContext from a checkpoint file.
JavaStreamingContext(String, Configuration) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
Re-creates a JavaStreamingContext from a checkpoint file.
JavaStreamingListenerEvent - Interface in org.apache.spark.streaming.api.java
Base trait for events related to JavaStreamingListener.
javaTopicAssignments() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel

javaTopicDistributions() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
Java-friendly version of topicDistributions.
javaTopTopicsPerDocument(int) - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
Java-friendly version of topTopicsPerDocument.
javaTreeWeights() - Method in interface org.apache.spark.ml.tree.TreeEnsembleModel
Weights used by the Python wrappers.
javaTypeToDataType(Type) - Method in interface org.apache.spark.sql.hive.HiveInspectors

javaTypeToDataType(Type) - Static method in class org.apache.spark.sql.hive.orc.OrcFileFormat

JavaUtils - Class in org.apache.spark.api.java

JavaUtils() - Constructor for class org.apache.spark.api.java.JavaUtils

JavaUtils.SerializableMapWrapper<A,B> - Class in org.apache.spark.api.java

javaVersion() - Method in class org.apache.spark.status.api.v1.RuntimeInfo

jdbc(String, String, Properties) - Method in class org.apache.spark.sql.DataFrameReader
Construct a DataFrame representing the database table accessible via JDBC URL url named table and connection properties.
jdbc(String, String, String, long, long, int, Properties) - Method in class org.apache.spark.sql.DataFrameReader
Construct a DataFrame representing the database table accessible via JDBC URL url named table.
jdbc(String, String, String[], Properties) - Method in class org.apache.spark.sql.DataFrameReader
Construct a DataFrame representing the database table accessible via JDBC URL url named table using connection properties.
jdbc(String, String, Properties) - Method in class org.apache.spark.sql.DataFrameWriter
Saves the content of the DataFrame to an external database table via JDBC.
JdbcDialect - Class in org.apache.spark.sql.jdbc
:: DeveloperApi :: Encapsulates everything (extensions, workarounds, quirks) needed to handle the SQL dialect of a certain database or JDBC driver.
JdbcDialect() - Constructor for class org.apache.spark.sql.jdbc.JdbcDialect

JdbcDialects - Class in org.apache.spark.sql.jdbc
:: DeveloperApi :: Registry of dialects that apply to every new JDBC org.apache.spark.sql.DataFrame.
JdbcDialects() - Constructor for class org.apache.spark.sql.jdbc.JdbcDialects

jdbcNullType() - Method in class org.apache.spark.sql.jdbc.JdbcType

JdbcRDD<T> - Class in org.apache.spark.rdd
An RDD that executes a SQL query on a JDBC connection and reads results.
JdbcRDD(SparkContext, Function0<Connection>, String, long, long, int, Function1<ResultSet, T>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.JdbcRDD

JdbcRDD.ConnectionFactory - Interface in org.apache.spark.rdd

JdbcType - Class in org.apache.spark.sql.jdbc
:: DeveloperApi :: A database type definition coupled with the JDBC type needed to send null values to the database.
JdbcType(String, int) - Constructor for class org.apache.spark.sql.jdbc.JdbcType

JettyUtils - Class in org.apache.spark.ui
Utilities for launching a web server using Jetty's HTTP Server class.
JettyUtils() - Constructor for class org.apache.spark.ui.JettyUtils

JettyUtils.ServletParams<T> - Class in org.apache.spark.ui

JettyUtils.ServletParams$ - Class in org.apache.spark.ui

JOB_DAG() - Static method in class org.apache.spark.ui.ToolTips

JOB_TIMELINE() - Static method in class org.apache.spark.ui.ToolTips

JobData - Class in org.apache.spark.status.api.v1

jobEndFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

jobEndToJson(SparkListenerJobEnd) - Static method in class org.apache.spark.util.JsonProtocol

JobExecutionStatus - Enum in org.apache.spark

jobFailed(Exception) - Method in interface org.apache.spark.scheduler.JobListener

JobGeneratorEvent - Interface in org.apache.spark.streaming.scheduler
Event classes for JobGenerator.
jobGroup() - Method in class org.apache.spark.status.api.v1.JobData

jobId() - Method in class org.apache.spark.scheduler.SparkListenerJobEnd

jobId() - Method in class org.apache.spark.scheduler.SparkListenerJobStart

jobId() - Method in interface org.apache.spark.SparkJobInfo

jobId() - Method in class org.apache.spark.SparkJobInfoImpl

jobId() - Method in class org.apache.spark.status.api.v1.JobData

jobId() - Method in class org.apache.spark.status.LiveJob

jobID() - Method in class org.apache.spark.TaskCommitDenied

jobIds() - Method in interface org.apache.spark.api.java.JavaFutureAction
Returns the job IDs run by the underlying async operation.
jobIds() - Method in class org.apache.spark.ComplexFutureAction

jobIds() - Method in interface org.apache.spark.FutureAction
Returns the job IDs run by the underlying async operation.
jobIds() - Method in class org.apache.spark.SimpleFutureAction

jobIds() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo

jobIds() - Method in class org.apache.spark.status.LiveStage

JobListener - Interface in org.apache.spark.scheduler
Interface used to listen for job completion or failure events after submitting a job to the DAGScheduler.
JobResult - Interface in org.apache.spark.scheduler
:: DeveloperApi :: A result of a job in the DAGScheduler.
jobResult() - Method in class org.apache.spark.scheduler.SparkListenerJobEnd

jobResultFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

jobResultToJson(JobResult) - Static method in class org.apache.spark.util.JsonProtocol

jobs() - Method in class org.apache.spark.status.LiveStage

JobSchedulerEvent - Interface in org.apache.spark.streaming.scheduler

jobStartFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

jobStartToJson(SparkListenerJobStart) - Static method in class org.apache.spark.util.JsonProtocol

JobSubmitter - Interface in org.apache.spark
Handle via which a "run" function passed to a ComplexFutureAction can submit jobs for execution.
JobSucceeded - Class in org.apache.spark.scheduler

JobSucceeded() - Constructor for class org.apache.spark.scheduler.JobSucceeded

join(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
Return an RDD containing all pairs of elements with matching keys in this and other.
join(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
Return an RDD containing all pairs of elements with matching keys in this and other.
join(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
Return an RDD containing all pairs of elements with matching keys in this and other.
join(RDD<Tuple2<K, W>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
Return an RDD containing all pairs of elements with matching keys in this and other.
join(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Return an RDD containing all pairs of elements with matching keys in this and other.
join(RDD<Tuple2<K, W>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
Return an RDD containing all pairs of elements with matching keys in this and other.
join(Dataset<?>) - Method in class org.apache.spark.sql.Dataset
Join with another DataFrame.
join(Dataset<?>, String) - Method in class org.apache.spark.sql.Dataset
Inner equi-join with another DataFrame using the given column.
join(Dataset<?>, Seq<String>) - Method in class org.apache.spark.sql.Dataset
Inner equi-join with another DataFrame using the given columns.
join(Dataset<?>, Seq<String>, String) - Method in class org.apache.spark.sql.Dataset
Equi-join with another DataFrame using the given columns.
join(Dataset<?>, Column) - Method in class org.apache.spark.sql.Dataset
Inner join with another DataFrame, using the given join expression.
join(Dataset<?>, Column, String) - Method in class org.apache.spark.sql.Dataset
Join with another DataFrame, using the given join expression.
join(JavaPairDStream<K, W>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
join(JavaPairDStream<K, W>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
join(JavaPairDStream<K, W>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
join(DStream<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
join(DStream<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
join(DStream<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
joinVertices(RDD<Tuple2<Object, U>>, Function3<Object, VD, U, VD>, ClassTag<U>) - Method in class org.apache.spark.graphx.GraphOps
Join the vertices with an RDD and then apply a function from the vertex and RDD entry to a new vertex value.
joinWith(Dataset<U>, Column, String) - Method in class org.apache.spark.sql.Dataset
Joins this Dataset returning a Tuple2 for each pair where condition evaluates to true.
joinWith(Dataset<U>, Column) - Method in class org.apache.spark.sql.Dataset
Using inner equi-join to join this Dataset returning a Tuple2 for each pair where condition evaluates to true.
json() - Method in class org.apache.spark.sql.connector.read.streaming.Offset
A JSON-serialized representation of an Offset that is used for saving offsets to the offset log.
json(String...) - Method in class org.apache.spark.sql.DataFrameReader
Loads JSON files and returns the results as a DataFrame.
json(String) - Method in class org.apache.spark.sql.DataFrameReader
Loads a JSON file and returns the results as a DataFrame.
json(Seq<String>) - Method in class org.apache.spark.sql.DataFrameReader
Loads JSON files and returns the results as a DataFrame.
json(JavaRDD<String>) - Method in class org.apache.spark.sql.DataFrameReader
Deprecated.
Use json(Dataset[String]) instead. Since 2.2.0.
json(RDD<String>) - Method in class org.apache.spark.sql.DataFrameReader
Deprecated.
Use json(Dataset[String]) instead. Since 2.2.0.
json(Dataset<String>) - Method in class org.apache.spark.sql.DataFrameReader
Loads a Dataset[String] storing JSON objects (JSON Lines text format or newline-delimited JSON) and returns the result as a DataFrame.
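The json readers above expect the JSON Lines format: one complete JSON object per line, each line becoming one row. A small pure-Python sketch of that input convention (illustration only, not Spark's actual parser):

```python
import json

def parse_json_lines(text):
    """Parse newline-delimited JSON: each non-empty line is one complete
    JSON object, the per-record layout DataFrameReader.json expects."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

rows = parse_json_lines('{"name": "a", "age": 1}\n{"name": "b", "age": 2}\n')
```

One object per line is what makes the format splittable: each record can be parsed independently, without scanning for a multi-line document boundary.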
json(String) - Method in class org.apache.spark.sql.DataFrameWriter
Saves the content of the DataFrame in JSON format (JSON Lines text format or newline-delimited JSON) at the specified path.
json() - Method in interface org.apache.spark.sql.Row
The compact JSON representation of this row.
json(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
Loads a JSON file stream and returns the results as a DataFrame.
json() - Method in class org.apache.spark.sql.streaming.SinkProgress
The compact JSON representation of this progress.
json() - Method in class org.apache.spark.sql.streaming.SourceProgress
The compact JSON representation of this progress.
json() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
The compact JSON representation of this progress.
json() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
The compact JSON representation of this progress.
json() - Method in class org.apache.spark.sql.streaming.StreamingQueryStatus
The compact JSON representation of this status.
json() - Static method in class org.apache.spark.sql.types.BinaryType

json() - Static method in class org.apache.spark.sql.types.BooleanType

json() - Static method in class org.apache.spark.sql.types.ByteType

json() - Static method in class org.apache.spark.sql.types.CalendarIntervalType

json() - Method in class org.apache.spark.sql.types.DataType
The compact JSON representation of this data type.
json() - Static method in class org.apache.spark.sql.types.DateType

json() - Static method in class org.apache.spark.sql.types.DoubleType

json() - Static method in class org.apache.spark.sql.types.FloatType

json() - Static method in class org.apache.spark.sql.types.IntegerType

json() - Static method in class org.apache.spark.sql.types.LongType

json() - Method in class org.apache.spark.sql.types.Metadata
Converts to its JSON representation.
json() - Static method in class org.apache.spark.sql.types.NullType

json() - Static method in class org.apache.spark.sql.types.ShortType

json() - Static method in class org.apache.spark.sql.types.StringType

json() - Static method in class org.apache.spark.sql.types.TimestampType

json_tuple(Column, String...) - Static method in class org.apache.spark.sql.functions
Creates a new row for a json column according to the given field names.
json_tuple(Column, Seq<String>) - Static method in class org.apache.spark.sql.functions
Creates a new row for a json column according to the given field names.
jsonDecode(String) - Method in class org.apache.spark.ml.param.BooleanParam

jsonDecode(String) - Method in class org.apache.spark.ml.param.DoubleArrayArrayParam

jsonDecode(String) - Method in class org.apache.spark.ml.param.DoubleArrayParam

jsonDecode(String) - Method in class org.apache.spark.ml.param.DoubleParam

jsonDecode(String) - Method in class org.apache.spark.ml.param.FloatParam

jsonDecode(String) - Method in class org.apache.spark.ml.param.IntArrayParam

jsonDecode(String) - Method in class org.apache.spark.ml.param.IntParam

jsonDecode(String) - Method in class org.apache.spark.ml.param.LongParam

jsonDecode(String) - Method in class org.apache.spark.ml.param.Param
Decodes a param value from JSON.
jsonDecode(String) - Method in class org.apache.spark.ml.param.StringArrayParam

jsonEncode(boolean) - Method in class org.apache.spark.ml.param.BooleanParam

jsonEncode(double[][]) - Method in class org.apache.spark.ml.param.DoubleArrayArrayParam

jsonEncode(double[]) - Method in class org.apache.spark.ml.param.DoubleArrayParam

jsonEncode(double) - Method in class org.apache.spark.ml.param.DoubleParam

jsonEncode(float) - Method in class org.apache.spark.ml.param.FloatParam

jsonEncode(int[]) - Method in class org.apache.spark.ml.param.IntArrayParam

jsonEncode(int) - Method in class org.apache.spark.ml.param.IntParam

jsonEncode(long) - Method in class org.apache.spark.ml.param.LongParam

jsonEncode(T) - Method in class org.apache.spark.ml.param.Param
Encodes a param value into JSON, which can be decoded by `jsonDecode()`.
jsonEncode(String[]) - Method in class org.apache.spark.ml.param.StringArrayParam

JsonMatrixConverter - Class in org.apache.spark.ml.linalg

JsonMatrixConverter() - Constructor for class org.apache.spark.ml.linalg.JsonMatrixConverter

JsonProtocol - Class in org.apache.spark.util
Serializes SparkListener events to/from JSON.
JsonProtocol() - Constructor for class org.apache.spark.util.JsonProtocol

jsonResponderToServlet(Function1<HttpServletRequest, JsonAST.JValue>) - Static method in class org.apache.spark.ui.JettyUtils

jsonValue() - Method in interface org.apache.spark.sql.Row
JSON representation of the row.
JsonVectorConverter - Class in org.apache.spark.ml.linalg

JsonVectorConverter() - Constructor for class org.apache.spark.ml.linalg.JsonVectorConverter

jValueDecode(JsonAST.JValue) - Static method in class org.apache.spark.ml.param.DoubleParam
Decodes a param value from JValue.
jValueDecode(JsonAST.JValue) - Static method in class org.apache.spark.ml.param.FloatParam
Decodes a param value from JValue.
jValueEncode(double) - Static method in class org.apache.spark.ml.param.DoubleParam
Encodes a param value into JValue.
jValueEncode(float) - Static method in class org.apache.spark.ml.param.FloatParam
Encodes a param value into JValue.
JVM_GC_TIME() - Static method in class org.apache.spark.InternalAccumulator

jvmGcTime() - Method in class org.apache.spark.status.api.v1.StageData

jvmGcTime() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions

jvmGcTime() - Method in class org.apache.spark.status.api.v1.TaskMetrics

JVMHeapMemory - Class in org.apache.spark.metrics

JVMHeapMemory() - Constructor for class org.apache.spark.metrics.JVMHeapMemory

JVMOffHeapMemory - Class in org.apache.spark.metrics

JVMOffHeapMemory() - Constructor for class org.apache.spark.metrics.JVMOffHeapMemory


K

k() - Method in class org.apache.spark.ml.clustering.BisectingKMeans

k() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel

k() - Method in interface org.apache.spark.ml.clustering.BisectingKMeansParams
The desired number of leaf clusters.
k() - Method in class org.apache.spark.ml.clustering.ClusteringSummary

k() - Method in class org.apache.spark.ml.clustering.GaussianMixture

k() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel

k() - Method in interface org.apache.spark.ml.clustering.GaussianMixtureParams
Number of independent Gaussians in the mixture model.
k() - Method in class org.apache.spark.ml.clustering.KMeans

k() - Method in class org.apache.spark.ml.clustering.KMeansModel

k() - Method in interface org.apache.spark.ml.clustering.KMeansParams
The number of clusters to create (k).
k() - Method in class org.apache.spark.ml.clustering.LDA

k() - Method in class org.apache.spark.ml.clustering.LDAModel

k() - Method in interface org.apache.spark.ml.clustering.LDAParams
Param for the number of topics (clusters) to infer.
k() - Method in class org.apache.spark.ml.clustering.PowerIterationClustering

k() - Method in interface org.apache.spark.ml.clustering.PowerIterationClusteringParams
The number of clusters to create (k).
k() - Method in class org.apache.spark.ml.evaluation.RankingEvaluator

k() - Method in class org.apache.spark.ml.feature.PCA

k() - Method in class org.apache.spark.ml.feature.PCAModel

k() - Method in interface org.apache.spark.ml.feature.PCAParams
The number of principal components.
k() - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel

k() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel

k() - Method in class org.apache.spark.mllib.clustering.ExpectationSum

k() - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
Number of Gaussians in the mixture.
k() - Method in class org.apache.spark.mllib.clustering.KMeansModel
Total number of clusters.
k() - Method in class org.apache.spark.mllib.clustering.LDAModel
Number of topics.
k() - Method in class org.apache.spark.mllib.clustering.LocalLDAModel

k() - Method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel

k() - Method in class org.apache.spark.mllib.clustering.StreamingKMeans

k() - Method in class org.apache.spark.mllib.feature.PCA

k() - Method in class org.apache.spark.mllib.feature.PCAModel

K_MEANS_PARALLEL() - Static method in class org.apache.spark.mllib.clustering.KMeans

KafkaRedactionUtil - Class in org.apache.spark.kafka010

KafkaRedactionUtil() - Constructor for class org.apache.spark.kafka010.KafkaRedactionUtil

KafkaTokenSparkConf - Class in org.apache.spark.kafka010

KafkaTokenSparkConf() - Constructor for class org.apache.spark.kafka010.KafkaTokenSparkConf

KafkaTokenUtil - Class in org.apache.spark.kafka010

KafkaTokenUtil() - Constructor for class org.apache.spark.kafka010.KafkaTokenUtil

kClassTag() - Method in class org.apache.spark.api.java.JavaHadoopRDD

kClassTag() - Method in class org.apache.spark.api.java.JavaNewHadoopRDD

kClassTag() - Method in class org.apache.spark.api.java.JavaPairRDD

kClassTag() - Method in class org.apache.spark.streaming.api.java.JavaPairInputDStream

kClassTag() - Method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream

keepLastCheckpoint() - Method in class org.apache.spark.ml.clustering.LDA

keepLastCheckpoint() - Method in class org.apache.spark.ml.clustering.LDAModel

keepLastCheckpoint() - Method in interface org.apache.spark.ml.clustering.LDAParams
For EM optimizer only: optimizer = "em".
KERBEROS_ENABLED() - Static method in class org.apache.spark.internal.config.History

KERBEROS_KEYTAB() - Static method in class org.apache.spark.internal.config.History

KERBEROS_PRINCIPAL() - Static method in class org.apache.spark.internal.config.History

KernelDensity - Class in org.apache.spark.mllib.stat
Kernel density estimation.
KernelDensity() - Constructor for class org.apache.spark.mllib.stat.KernelDensity

keyArray() - Method in class org.apache.spark.sql.vectorized.ColumnarMap

keyAs(Encoder<L>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Returns a new KeyValueGroupedDataset where the type of the key has been mapped to the specified type.
keyBy(Function<T, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Creates tuples of the elements in this RDD by applying f.
keyBy(Function1<T, K>) - Method in class org.apache.spark.rdd.RDD
Creates tuples of the elements in this RDD by applying f.
keyOrdering() - Method in class org.apache.spark.ShuffleDependency

keyPrefix() - Method in interface org.apache.spark.sql.connector.catalog.SessionConfigSupport
Key prefix of the session configs to propagate, which is usually the data source name.
keys() - Method in class org.apache.spark.api.java.JavaPairRDD
Return an RDD with the keys of each tuple.
keys() - Method in class org.apache.spark.rdd.PairRDDFunctions
Return an RDD with the keys of each tuple.
keys() - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Returns a Dataset that contains each unique key.
keySet() - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap

keyType() - Method in class org.apache.spark.sql.types.MapType

KeyValueGroupedDataset<K,V> - Class in org.apache.spark.sql
A Dataset that has been logically grouped by a user-specified grouping key.
kFold(RDD<T>, int, int, ClassTag<T>) - Static method in class org.apache.spark.mllib.util.MLUtils
Return a k-element array of (training, validation) RDD pairs: the first element of each pair holds the training data (the complement of the validation data) and the second holds the validation data, a unique 1/kth of the input.
kFold(RDD<T>, int, long, ClassTag<T>) - Static method in class org.apache.spark.mllib.util.MLUtils
Version of kFold() taking a Long seed.
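The kFold entries above describe the standard k-fold splitting scheme: each element lands in exactly one validation split, and the matching training set is everything else. A pure-Python sketch of that scheme (illustrative only; Spark assigns elements to folds by random sampling with a seed, not by index as done here):

```python
def k_fold(items, k):
    """Yield k (training, validation) pairs; each element appears in
    exactly one validation split, and training is its complement."""
    folds = [items[i::k] for i in range(k)]  # round-robin fold assignment
    for i, validation in enumerate(folds):
        training = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield training, validation

splits = list(k_fold(list(range(6)), 3))
```

Every split partitions the full input, so a model evaluated on each validation fold sees each record exactly once across the k rounds.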
kill() - Method in interface org.apache.spark.launcher.SparkAppHandle
Tries to kill the underlying application.
killAllTaskAttempts(int, boolean, String) - Method in interface org.apache.spark.scheduler.TaskScheduler

killed() - Method in class org.apache.spark.scheduler.TaskInfo

KILLED() - Static method in class org.apache.spark.TaskState

killedSummary() - Method in class org.apache.spark.status.LiveJob

killedSummary() - Method in class org.apache.spark.status.LiveStage

killedTasks() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary

killedTasks() - Method in class org.apache.spark.status.LiveExecutorStageSummary

killedTasks() - Method in class org.apache.spark.status.LiveJob

killedTasks() - Method in class org.apache.spark.status.LiveStage

killedTasksSummary() - Method in class org.apache.spark.status.api.v1.JobData

killedTasksSummary() - Method in class org.apache.spark.status.api.v1.StageData

killExecutor(String) - Method in interface org.apache.spark.ExecutorAllocationClient
Request that the cluster manager kill the specified executor.
killExecutor(String) - Method in class org.apache.spark.SparkContext
:: DeveloperApi :: Request that the cluster manager kill the specified executor.
killExecutors(Seq<String>, boolean, boolean, boolean) - Method in interface org.apache.spark.ExecutorAllocationClient
Request that the cluster manager kill the specified executors.
KillExecutors(Seq<String>) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutors

killExecutors(Seq<String>) - Method in class org.apache.spark.SparkContext
:: DeveloperApi :: Request that the cluster manager kill the specified executors.
KillExecutors$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutors$

killExecutorsOnHost(String) - Method in interface org.apache.spark.ExecutorAllocationClient
Request that the cluster manager kill every executor on the specified host.
KillExecutorsOnHost(String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutorsOnHost

KillExecutorsOnHost$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutorsOnHost$

KillTask(long, String, boolean, String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask

KillTask - Class in org.apache.spark.scheduler.local

KillTask(long, boolean, String) - Constructor for class org.apache.spark.scheduler.local.KillTask

killTask(long, String, boolean, String) - Method in interface org.apache.spark.scheduler.SchedulerBackend
Requests that an executor kills a running task.
KillTask$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask$

killTaskAttempt(long, boolean, String) - Method in interface org.apache.spark.scheduler.TaskScheduler
Kills a task attempt.
killTaskAttempt(long, boolean, String) - Method in class org.apache.spark.SparkContext
Kill and reschedule the given task attempt.
KinesisDataGenerator - Interface in org.apache.spark.streaming.kinesis
A wrapper interface that will allow us to consolidate the code for synthetic data generation.
KinesisInitialPositions - Class in org.apache.spark.streaming.kinesis

KinesisInitialPositions() - Constructor for class org.apache.spark.streaming.kinesis.KinesisInitialPositions

KinesisInitialPositions.AtTimestamp - Class in org.apache.spark.streaming.kinesis

KinesisInitialPositions.Latest - Class in org.apache.spark.streaming.kinesis

KinesisInitialPositions.TrimHorizon - Class in org.apache.spark.streaming.kinesis

KinesisUtilsPythonHelper - Class in org.apache.spark.streaming.kinesis
A helper class that wraps the methods in KinesisUtils in a more Python-friendly class and functions so that they can be easily instantiated and called from Python's KinesisUtils.
KinesisUtilsPythonHelper() - Constructor for class org.apache.spark.streaming.kinesis.KinesisUtilsPythonHelper

kManifest() - 类 中的方法org.apache.spark.streaming.api.java.JavaPairDStream
 
KMeans - org.apache.spark.ml.clustering中的类
K-means clustering with support for k-means|| initialization proposed by Bahmani et al.
KMeans(String) - 类 的构造器org.apache.spark.ml.clustering.KMeans
 
KMeans() - 类 的构造器org.apache.spark.ml.clustering.KMeans
 
KMeans - org.apache.spark.mllib.clustering中的类
K-means clustering with a k-means++ like initialization mode (the k-means|| algorithm by Bahmani et al).
KMeans() - 类 的构造器org.apache.spark.mllib.clustering.KMeans
Constructs a KMeans instance with default parameters: {k: 2, maxIterations: 20, initializationMode: "k-means||", initializationSteps: 2, epsilon: 1e-4, seed: random, distanceMeasure: "euclidean"}.
KMeansDataGenerator - org.apache.spark.mllib.util中的类
:: DeveloperApi :: Generate test data for KMeans.
KMeansDataGenerator() - 类 的构造器org.apache.spark.mllib.util.KMeansDataGenerator
 
KMeansModel - org.apache.spark.ml.clustering中的类
Model fitted by KMeans.
KMeansModel - org.apache.spark.mllib.clustering中的类
A clustering model for K-means.
KMeansModel(Vector[], String, double, int) - 类 的构造器org.apache.spark.mllib.clustering.KMeansModel
 
KMeansModel(Vector[]) - 类 的构造器org.apache.spark.mllib.clustering.KMeansModel
 
KMeansModel(Iterable<Vector>) - 类 的构造器org.apache.spark.mllib.clustering.KMeansModel
A Java-friendly constructor that takes an Iterable of Vectors.
KMeansModel.SaveLoadV1_0$ - org.apache.spark.mllib.clustering中的类
 
KMeansModel.SaveLoadV2_0$ - org.apache.spark.mllib.clustering中的类
 
KMeansParams - org.apache.spark.ml.clustering中的接口
Common params for KMeans and KMeansModel
kMeansPlusPlus(int, VectorWithNorm[], double[], int, int) - 类 中的静态方法org.apache.spark.mllib.clustering.LocalKMeans
Run K-means++ on the weighted point set points.
KMeansSummary - org.apache.spark.ml.clustering中的类
Summary of KMeans.
KnownSizeEstimation - org.apache.spark.util中的接口
A trait that allows a class to give SizeEstimator more accurate size estimation.
KolmogorovSmirnovTest - org.apache.spark.ml.stat中的类
Conduct the two-sided Kolmogorov Smirnov (KS) test for data sampled from a continuous distribution.
KolmogorovSmirnovTest() - 类 的构造器org.apache.spark.ml.stat.KolmogorovSmirnovTest
 
kolmogorovSmirnovTest(RDD<Object>, String, double...) - 类 中的静态方法org.apache.spark.mllib.stat.Statistics
Convenience function to conduct a one-sample, two-sided Kolmogorov-Smirnov test for probability distribution equality.
kolmogorovSmirnovTest(JavaDoubleRDD, String, double...) - 类 中的静态方法org.apache.spark.mllib.stat.Statistics
Java-friendly version of kolmogorovSmirnovTest().
kolmogorovSmirnovTest(RDD<Object>, Function1<Object, Object>) - 类 中的静态方法org.apache.spark.mllib.stat.Statistics
Conduct the two-sided Kolmogorov-Smirnov (KS) test for data sampled from a continuous distribution.
kolmogorovSmirnovTest(RDD<Object>, String, Seq<Object>) - 类 中的静态方法org.apache.spark.mllib.stat.Statistics
Convenience function to conduct a one-sample, two-sided Kolmogorov-Smirnov test for probability distribution equality.
kolmogorovSmirnovTest(JavaDoubleRDD, String, Seq<Object>) - 类 中的静态方法org.apache.spark.mllib.stat.Statistics
Java-friendly version of kolmogorovSmirnovTest().
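The kolmogorovSmirnovTest entries above all compute the one-sample, two-sided KS statistic. As a sketch of the underlying statistic (the textbook definition, not Spark's distributed implementation), the hypothetical helper below computes the maximum deviation between the empirical CDF of a sample and a theoretical CDF:

```python
def ks_statistic(sample, cdf):
    # Two-sided one-sample KS statistic:
    # D = max over sorted points of the gap between the empirical
    # CDF step function and the theoretical CDF.
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f = cdf(x)
        d = max(d, i / n - f, f - (i - 1) / n)
    return d

# Against the uniform(0, 1) CDF, cdf(x) = x.
d = ks_statistic([0.1, 0.2, 0.3, 0.4], lambda x: x)
```

The test returns both this statistic and a p-value derived from its asymptotic distribution.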
KolmogorovSmirnovTest - org.apache.spark.mllib.stat.test中的类
Conduct the two-sided Kolmogorov Smirnov (KS) test for data sampled from a continuous distribution.
KolmogorovSmirnovTest() - 类 的构造器org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest
 
KolmogorovSmirnovTest.NullHypothesis$ - org.apache.spark.mllib.stat.test中的类
 
KolmogorovSmirnovTestResult - org.apache.spark.mllib.stat.test中的类
Object containing the test results for the Kolmogorov-Smirnov test.
Kryo - org.apache.spark.internal.config中的类
 
Kryo() - 类 的构造器org.apache.spark.internal.config.Kryo
 
kryo(ClassTag<T>) - 类 中的静态方法org.apache.spark.sql.Encoders
(Scala-specific) Creates an encoder that serializes objects of type T using Kryo.
kryo(Class<T>) - 类 中的静态方法org.apache.spark.sql.Encoders
Creates an encoder that serializes objects of type T using Kryo.
KRYO_CLASSES_TO_REGISTER() - 类 中的静态方法org.apache.spark.internal.config.Kryo
 
KRYO_REFERENCE_TRACKING() - 类 中的静态方法org.apache.spark.internal.config.Kryo
 
KRYO_REGISTRATION_REQUIRED() - 类 中的静态方法org.apache.spark.internal.config.Kryo
 
KRYO_SERIALIZER_BUFFER_SIZE() - 类 中的静态方法org.apache.spark.internal.config.Kryo
 
KRYO_SERIALIZER_MAX_BUFFER_SIZE() - 类 中的静态方法org.apache.spark.internal.config.Kryo
 
KRYO_USE_POOL() - 类 中的静态方法org.apache.spark.internal.config.Kryo
 
KRYO_USE_UNSAFE() - 类 中的静态方法org.apache.spark.internal.config.Kryo
 
KRYO_USER_REGISTRATORS() - 类 中的静态方法org.apache.spark.internal.config.Kryo
 
KryoRegistrator - org.apache.spark.serializer中的接口
Interface implemented by clients to register their classes with Kryo when using Kryo serialization.
KryoSerializer - org.apache.spark.serializer中的类
A Spark serializer that uses the Kryo serialization library.
KryoSerializer(SparkConf) - 类 的构造器org.apache.spark.serializer.KryoSerializer
 
kurtosis(Column) - 类 中的静态方法org.apache.spark.sql.functions
Aggregate function: returns the kurtosis of the values in a group.
kurtosis(String) - 类 中的静态方法org.apache.spark.sql.functions
Aggregate function: returns the kurtosis of the values in a group.
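The kurtosis entries above describe an aggregate over a group's values. One common definition (excess kurtosis, the fourth standardized central moment minus 3) can be sketched in pure Python as below; the function name is hypothetical, and Spark's exact moment formula should be checked against its documentation:

```python
def excess_kurtosis(xs):
    # Population central moments m2 and m4, then m4 / m2^2 - 3.
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / (m2 * m2) - 3.0

k = excess_kurtosis([1, 2, 3, 4, 5])
```

Under this definition a normal distribution has excess kurtosis 0, and a flat sample like the one above comes out negative.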
KVUtils - org.apache.spark.status中的类
 
KVUtils() - 类 的构造器org.apache.spark.status.KVUtils
 

L

L1Updater - org.apache.spark.mllib.optimization中的类
:: DeveloperApi :: Updater for L1 regularized problems.
L1Updater() - 类 的构造器org.apache.spark.mllib.optimization.L1Updater
 
label() - 类 中的方法org.apache.spark.ml.feature.LabeledPoint
 
label() - 类 中的方法org.apache.spark.mllib.regression.LabeledPoint
 
labelCol() - 接口 中的方法org.apache.spark.ml.classification.LogisticRegressionSummary
Field in "predictions" which gives the true label of each instance (if available).
labelCol() - 类 中的方法org.apache.spark.ml.classification.LogisticRegressionSummaryImpl
 
labelCol() - 类 中的方法org.apache.spark.ml.classification.OneVsRest
 
labelCol() - 类 中的方法org.apache.spark.ml.classification.OneVsRestModel
 
labelCol() - 类 中的方法org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
 
labelCol() - 类 中的方法org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
 
labelCol() - 类 中的方法org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
 
labelCol() - 类 中的方法org.apache.spark.ml.evaluation.RankingEvaluator
 
labelCol() - 类 中的方法org.apache.spark.ml.evaluation.RegressionEvaluator
 
labelCol() - 类 中的方法org.apache.spark.ml.feature.ChiSqSelector
 
labelCol() - 类 中的方法org.apache.spark.ml.feature.ChiSqSelectorModel
 
labelCol() - 类 中的方法org.apache.spark.ml.feature.RFormula
 
labelCol() - 类 中的方法org.apache.spark.ml.feature.RFormulaModel
 
labelCol() - 接口 中的方法org.apache.spark.ml.param.shared.HasLabelCol
Param for label column name.
labelCol() - 类 中的方法org.apache.spark.ml.PredictionModel
 
labelCol() - 类 中的方法org.apache.spark.ml.Predictor
 
labelCol() - 类 中的方法org.apache.spark.ml.regression.AFTSurvivalRegression
 
labelCol() - 类 中的方法org.apache.spark.ml.regression.AFTSurvivalRegressionModel
 
labelCol() - 类 中的方法org.apache.spark.ml.regression.IsotonicRegression
 
labelCol() - 类 中的方法org.apache.spark.ml.regression.IsotonicRegressionModel
 
labelCol() - 类 中的方法org.apache.spark.ml.regression.LinearRegressionSummary
 
LabeledPoint - org.apache.spark.ml.feature中的类
Class that represents the features and label of a data point.
LabeledPoint(double, Vector) - 类 的构造器org.apache.spark.ml.feature.LabeledPoint
 
LabeledPoint - org.apache.spark.mllib.regression中的类
Class that represents the features and labels of a data point.
LabeledPoint(double, Vector) - 类 的构造器org.apache.spark.mllib.regression.LabeledPoint
 
LabelPropagation - org.apache.spark.graphx.lib中的类
Label Propagation algorithm.
LabelPropagation() - 类 的构造器org.apache.spark.graphx.lib.LabelPropagation
 
labels() - 接口 中的方法org.apache.spark.ml.classification.LogisticRegressionSummary
Returns the sequence of labels in ascending order.
labels() - 类 中的方法org.apache.spark.ml.feature.IndexToString
Optional param for array of labels specifying index-string mapping.
labels() - 类 中的方法org.apache.spark.ml.feature.StringIndexerModel
Deprecated.
`labels` is deprecated and will be removed in 3.1.0. Use `labelsArray` instead. Since 3.0.0.
labels() - 类 中的方法org.apache.spark.mllib.classification.NaiveBayesModel
 
labels() - 类 中的方法org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data
 
labels() - 类 中的方法org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
 
labels() - 类 中的方法org.apache.spark.mllib.evaluation.MulticlassMetrics
 
labels() - 类 中的方法org.apache.spark.mllib.evaluation.MultilabelMetrics
 
labelsArray() - 类 中的方法org.apache.spark.ml.feature.StringIndexerModel
 
lag(Column, int) - 类 中的静态方法org.apache.spark.sql.functions
Window function: returns the value that is offset rows before the current row, and null if there are fewer than offset rows before the current row.
lag(String, int) - 类 中的静态方法org.apache.spark.sql.functions
Window function: returns the value that is offset rows before the current row, and null if there are fewer than offset rows before the current row.
lag(String, int, Object) - 类 中的静态方法org.apache.spark.sql.functions
Window function: returns the value that is offset rows before the current row, and defaultValue if there are fewer than offset rows before the current row.
lag(Column, int, Object) - 类 中的静态方法org.apache.spark.sql.functions
Window function: returns the value that is offset rows before the current row, and defaultValue if there are fewer than offset rows before the current row.
LassoModel - org.apache.spark.mllib.regression中的类
Regression model trained using Lasso.
LassoModel(Vector, double) - 类 的构造器org.apache.spark.mllib.regression.LassoModel
 
LassoWithSGD - org.apache.spark.mllib.regression中的类
Train a regression model with L1-regularization using Stochastic Gradient Descent.
last(Column, boolean) - 类 中的静态方法org.apache.spark.sql.functions
Aggregate function: returns the last value in a group.
last(String, boolean) - 类 中的静态方法org.apache.spark.sql.functions
Aggregate function: returns the last value of the column in a group.
last(Column) - 类 中的静态方法org.apache.spark.sql.functions
Aggregate function: returns the last value in a group.
last(String) - 类 中的静态方法org.apache.spark.sql.functions
Aggregate function: returns the last value of the column in a group.
last_day(Column) - 类 中的静态方法org.apache.spark.sql.functions
Returns the last day of the month which the given date belongs to.
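The last_day entry above returns the final date of the month a given date belongs to. The same behavior can be sketched with Python's standard library (the helper name `last_day` here is illustrative, not Spark code):

```python
import calendar
import datetime

def last_day(d):
    # monthrange returns (weekday_of_first_day, number_of_days);
    # the day count is also the date of the month's last day.
    return d.replace(day=calendar.monthrange(d.year, d.month)[1])

feb_leap = last_day(datetime.date(2020, 2, 10))   # leap-year February
feb_common = last_day(datetime.date(2021, 2, 10))
```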
lastDir() - 类 中的方法org.apache.spark.mllib.optimization.NNLS.Workspace
 
lastError() - 类 中的方法org.apache.spark.status.api.v1.streaming.ReceiverInfo
 
lastError() - 类 中的方法org.apache.spark.streaming.scheduler.ReceiverInfo
 
lastErrorMessage() - 类 中的方法org.apache.spark.status.api.v1.streaming.ReceiverInfo
 
lastErrorMessage() - 类 中的方法org.apache.spark.streaming.scheduler.ReceiverInfo
 
lastErrorTime() - 类 中的方法org.apache.spark.status.api.v1.streaming.ReceiverInfo
 
lastErrorTime() - 类 中的方法org.apache.spark.streaming.scheduler.ReceiverInfo
 
lastProgress() - 接口 中的方法org.apache.spark.sql.streaming.StreamingQuery
Returns the most recent StreamingQueryProgress update of this streaming query.
lastStageNameAndDescription(org.apache.spark.status.AppStatusStore, JobData) - 类 中的静态方法org.apache.spark.ui.jobs.ApiHelper
 
lastUpdate() - 类 中的方法org.apache.spark.status.LiveRDDDistribution
 
lastUpdated() - 类 中的方法org.apache.spark.status.api.v1.ApplicationAttemptInfo
 
Latest() - 类 的构造器org.apache.spark.streaming.kinesis.KinesisInitialPositions.Latest
 
latestModel() - 类 中的方法org.apache.spark.mllib.clustering.StreamingKMeans
Return the latest model.
latestModel() - 类 中的方法org.apache.spark.mllib.regression.StreamingLinearAlgorithm
Return the latest model.
latestOffset() - 接口 中的方法org.apache.spark.sql.connector.read.streaming.MicroBatchStream
Returns the most recent offset available.
launch() - 类 中的方法org.apache.spark.launcher.SparkLauncher
Launches a sub-process that will start the configured Spark application.
LAUNCH_TIME() - 类 中的静态方法org.apache.spark.status.TaskIndexNames
 
LAUNCHING() - 类 中的静态方法org.apache.spark.TaskState
 
LaunchTask(org.apache.spark.util.SerializableBuffer) - 类 的构造器org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask
 
LaunchTask$() - 类 的构造器org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask$
 
launchTime() - 类 中的方法org.apache.spark.scheduler.TaskInfo
 
launchTime() - 类 中的方法org.apache.spark.status.api.v1.TaskData
 
Layer - org.apache.spark.ml.ann中的接口
Trait that holds Layer properties, that are needed to instantiate it.
LayerModel - org.apache.spark.ml.ann中的接口
Trait that holds Layer weights (or parameters).
layerModels() - 接口 中的方法org.apache.spark.ml.ann.TopologyModel
Array of layer models.
layers() - 接口 中的方法org.apache.spark.ml.ann.TopologyModel
Array of layers.
layers() - 类 中的方法org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
 
layers() - 类 中的方法org.apache.spark.ml.classification.MultilayerPerceptronClassifier
 
layers() - 接口 中的方法org.apache.spark.ml.classification.MultilayerPerceptronParams
Layer sizes including input size and output size.
LBFGS - org.apache.spark.mllib.optimization中的类
:: DeveloperApi :: Class used to solve an optimization problem using Limited-memory BFGS.
LBFGS(Gradient, Updater) - 类 的构造器org.apache.spark.mllib.optimization.LBFGS
 
LDA - org.apache.spark.ml.clustering中的类
Latent Dirichlet Allocation (LDA), a topic model designed for text documents.
LDA(String) - 类 的构造器org.apache.spark.ml.clustering.LDA
 
LDA() - 类 的构造器org.apache.spark.ml.clustering.LDA
 
LDA - org.apache.spark.mllib.clustering中的类
Latent Dirichlet Allocation (LDA), a topic model designed for text documents.
LDA() - 类 的构造器org.apache.spark.mllib.clustering.LDA
Constructs a LDA instance with default parameters.
LDAModel - org.apache.spark.ml.clustering中的类
Model fitted by LDA.
LDAModel - org.apache.spark.mllib.clustering中的类
Latent Dirichlet Allocation (LDA) model.
LDAOptimizer - org.apache.spark.mllib.clustering中的接口
:: DeveloperApi :: An LDAOptimizer specifies which optimization/learning/inference algorithm to use, and it can hold optimizer-specific parameters for users to set.
LDAParams - org.apache.spark.ml.clustering中的接口
 
LDAUtils - org.apache.spark.mllib.clustering中的类
Utility methods for LDA.
LDAUtils() - 类 的构造器org.apache.spark.mllib.clustering.LDAUtils
 
lead(String, int) - 类 中的静态方法org.apache.spark.sql.functions
Window function: returns the value that is offset rows after the current row, and null if there are fewer than offset rows after the current row.
lead(Column, int) - 类 中的静态方法org.apache.spark.sql.functions
Window function: returns the value that is offset rows after the current row, and null if there are fewer than offset rows after the current row.
lead(String, int, Object) - 类 中的静态方法org.apache.spark.sql.functions
Window function: returns the value that is offset rows after the current row, and defaultValue if there are fewer than offset rows after the current row.
lead(Column, int, Object) - 类 中的静态方法org.apache.spark.sql.functions
Window function: returns the value that is offset rows after the current row, and defaultValue if there are fewer than offset rows after the current row.
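The lag and lead entries above are window offset functions. As a rough illustration of their semantics over a single ordered list (ignoring window partitioning and ordering specs, which real Spark windows require), here is a pure-Python sketch with hypothetical helper names:

```python
def lag(xs, offset, default=None):
    # Value offset rows before each position, or default at the edge.
    return [xs[i - offset] if i - offset >= 0 else default
            for i in range(len(xs))]

def lead(xs, offset, default=None):
    # Value offset rows after each position, or default at the edge.
    return [xs[i + offset] if i + offset < len(xs) else default
            for i in range(len(xs))]

prev_vals = lag([10, 20, 30], 1)
next_vals = lead([10, 20, 30], 1)
```

In Spark the edge value is null (or the supplied defaultValue); Python's None plays that role here.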
leafCol() - 类 中的方法org.apache.spark.ml.classification.DecisionTreeClassificationModel
 
leafCol() - 类 中的方法org.apache.spark.ml.classification.DecisionTreeClassifier
 
leafCol() - 类 中的方法org.apache.spark.ml.classification.GBTClassificationModel
 
leafCol() - 类 中的方法org.apache.spark.ml.classification.GBTClassifier
 
leafCol() - 类 中的方法org.apache.spark.ml.classification.RandomForestClassificationModel
 
leafCol() - 类 中的方法org.apache.spark.ml.classification.RandomForestClassifier
 
leafCol() - 类 中的方法org.apache.spark.ml.regression.DecisionTreeRegressionModel
 
leafCol() - 类 中的方法org.apache.spark.ml.regression.DecisionTreeRegressor
 
leafCol() - 类 中的方法org.apache.spark.ml.regression.GBTRegressionModel
 
leafCol() - 类 中的方法org.apache.spark.ml.regression.GBTRegressor
 
leafCol() - 类 中的方法org.apache.spark.ml.regression.RandomForestRegressionModel
 
leafCol() - 类 中的方法org.apache.spark.ml.regression.RandomForestRegressor
 
leafCol() - 接口 中的方法org.apache.spark.ml.tree.DecisionTreeParams
Leaf indices column name.
leafIterator(Node) - 接口 中的方法org.apache.spark.ml.tree.DecisionTreeModel
 
LeafNode - org.apache.spark.ml.tree中的类
Decision tree leaf node.
learningDecay() - 类 中的方法org.apache.spark.ml.clustering.LDA
 
learningDecay() - 类 中的方法org.apache.spark.ml.clustering.LDAModel
 
learningDecay() - 接口 中的方法org.apache.spark.ml.clustering.LDAParams
For Online optimizer only: optimizer = "online".
learningOffset() - 类 中的方法org.apache.spark.ml.clustering.LDA
 
learningOffset() - 类 中的方法org.apache.spark.ml.clustering.LDAModel
 
learningOffset() - 接口 中的方法org.apache.spark.ml.clustering.LDAParams
For Online optimizer only: optimizer = "online".
learningRate() - 类 中的方法org.apache.spark.mllib.tree.configuration.BoostingStrategy
 
least(Column...) - 类 中的静态方法org.apache.spark.sql.functions
Returns the least value of the list of values, skipping null values.
least(String, String...) - 类 中的静态方法org.apache.spark.sql.functions
Returns the least value of the list of column names, skipping null values.
least(Seq<Column>) - 类 中的静态方法org.apache.spark.sql.functions
Returns the least value of the list of values, skipping null values.
least(String, Seq<String>) - 类 中的静态方法org.apache.spark.sql.functions
Returns the least value of the list of column names, skipping null values.
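The least entries above return the smallest of several values while skipping nulls. A pure-Python sketch of that null-skipping behavior (the function name is hypothetical; Python's None stands in for SQL null):

```python
def least(*values):
    # Ignore nulls; return None only if every input is null,
    # mirroring the documented "skipping null values" behavior.
    non_null = [v for v in values if v is not None]
    return min(non_null) if non_null else None

result = least(3, None, 1)
```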
LeastSquaresGradient - org.apache.spark.mllib.optimization中的类
:: DeveloperApi :: Compute gradient and loss for a Least-squared loss function, as used in linear regression.
LeastSquaresGradient() - 类 的构造器org.apache.spark.mllib.optimization.LeastSquaresGradient
 
left() - 类 中的方法org.apache.spark.sql.sources.And
 
left() - 类 中的方法org.apache.spark.sql.sources.Or
 
leftCategories() - 类 中的方法org.apache.spark.ml.tree.CategoricalSplit
Get sorted categories which split to the left
leftCategoriesOrThreshold() - 类 中的方法org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData
 
leftChild() - 类 中的方法org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
 
leftChild() - 类 中的方法org.apache.spark.ml.tree.InternalNode
 
leftChildIndex(int) - 类 中的静态方法org.apache.spark.mllib.tree.model.Node
Return the index of the left child of this node.
leftImpurity() - 类 中的方法org.apache.spark.mllib.tree.model.InformationGainStats
 
leftJoin(RDD<Tuple2<Object, VD2>>, Function3<Object, VD, Option<VD2>, VD3>, ClassTag<VD2>, ClassTag<VD3>) - 类 中的方法org.apache.spark.graphx.impl.VertexRDDImpl
 
leftJoin(RDD<Tuple2<Object, VD2>>, Function3<Object, VD, Option<VD2>, VD3>, ClassTag<VD2>, ClassTag<VD3>) - 类 中的方法org.apache.spark.graphx.VertexRDD
Left joins this VertexRDD with an RDD containing vertex attribute pairs.
leftNode() - 类 中的方法org.apache.spark.mllib.tree.model.Node
 
leftNodeId() - 类 中的方法org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
 
leftOuterJoin(JavaPairRDD<K, W>, Partitioner) - 类 中的方法org.apache.spark.api.java.JavaPairRDD
Perform a left outer join of this and other.
leftOuterJoin(JavaPairRDD<K, W>) - 类 中的方法org.apache.spark.api.java.JavaPairRDD
Perform a left outer join of this and other.
leftOuterJoin(JavaPairRDD<K, W>, int) - 类 中的方法org.apache.spark.api.java.JavaPairRDD
Perform a left outer join of this and other.
leftOuterJoin(RDD<Tuple2<K, W>>, Partitioner) - 类 中的方法org.apache.spark.rdd.PairRDDFunctions
Perform a left outer join of this and other.
leftOuterJoin(RDD<Tuple2<K, W>>) - 类 中的方法org.apache.spark.rdd.PairRDDFunctions
Perform a left outer join of this and other.
leftOuterJoin(RDD<Tuple2<K, W>>, int) - 类 中的方法org.apache.spark.rdd.PairRDDFunctions
Perform a left outer join of this and other.
leftOuterJoin(JavaPairDStream<K, W>) - 类 中的方法org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.
leftOuterJoin(JavaPairDStream<K, W>, int) - 类 中的方法org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.
leftOuterJoin(JavaPairDStream<K, W>, Partitioner) - 类 中的方法org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.
leftOuterJoin(DStream<Tuple2<K, W>>, ClassTag<W>) - 类 中的方法org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.
leftOuterJoin(DStream<Tuple2<K, W>>, int, ClassTag<W>) - 类 中的方法org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.
leftOuterJoin(DStream<Tuple2<K, W>>, Partitioner, ClassTag<W>) - 类 中的方法org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.
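The leftOuterJoin entries above all share the same pairwise semantics: every key on the left appears in the output, paired with each matching right value, or with an absent value when the key has no match. A pure-Python sketch over lists of key-value pairs (hypothetical helper; Spark returns Scala `Option` values where this uses None):

```python
from collections import defaultdict

def left_outer_join(left, right):
    # Index the right side by key.
    rmap = defaultdict(list)
    for k, w in right:
        rmap[k].append(w)
    out = []
    for k, v in left:
        if k in rmap:
            # One output pair per matching right value.
            out.extend((k, (v, w)) for w in rmap[k])
        else:
            # Unmatched left keys survive with an absent right value.
            out.append((k, (v, None)))
    return out

joined = left_outer_join([("a", 1), ("b", 2)], [("a", 9)])
```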
leftPredict() - 类 中的方法org.apache.spark.mllib.tree.model.InformationGainStats
 
leftZipJoin(VertexRDD<VD2>, Function3<Object, VD, Option<VD2>, VD3>, ClassTag<VD2>, ClassTag<VD3>) - 类 中的方法org.apache.spark.graphx.impl.VertexRDDImpl
 
leftZipJoin(VertexRDD<VD2>, Function3<Object, VD, Option<VD2>, VD3>, ClassTag<VD2>, ClassTag<VD3>) - 类 中的方法org.apache.spark.graphx.VertexRDD
Left joins this RDD with another VertexRDD with the same index.
length() - 类 中的方法org.apache.spark.scheduler.SplitInfo
 
length(Column) - 类 中的静态方法org.apache.spark.sql.functions
Computes the character length of a given string or number of bytes of a binary string.
length() - 接口 中的方法org.apache.spark.sql.Row
Number of elements in the Row.
length() - 类 中的方法org.apache.spark.sql.types.CharType
 
length() - 类 中的方法org.apache.spark.sql.types.HiveStringType
 
length() - 类 中的方法org.apache.spark.sql.types.StructType
 
length() - 类 中的方法org.apache.spark.sql.types.VarcharType
 
length() - 类 中的方法org.apache.spark.status.RDDPartitionSeq
 
leq(Object) - 类 中的方法org.apache.spark.sql.Column
Less than or equal to.
less(Duration) - 类 中的方法org.apache.spark.streaming.Duration
 
less(Time) - 类 中的方法org.apache.spark.streaming.Time
 
lessEq(Duration) - 类 中的方法org.apache.spark.streaming.Duration
 
lessEq(Time) - 类 中的方法org.apache.spark.streaming.Time
 
LessThan - org.apache.spark.sql.sources中的类
A filter that evaluates to true iff the attribute evaluates to a value less than value.
LessThan(String, Object) - 类 的构造器org.apache.spark.sql.sources.LessThan
 
LessThanOrEqual - org.apache.spark.sql.sources中的类
A filter that evaluates to true iff the attribute evaluates to a value less than or equal to value.
LessThanOrEqual(String, Object) - 类 的构造器org.apache.spark.sql.sources.LessThanOrEqual
 
levenshtein(Column, Column) - 类 中的静态方法org.apache.spark.sql.functions
Computes the Levenshtein distance of the two given string columns.
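The levenshtein entry above computes the edit distance between two string columns. The underlying metric (minimum number of single-character insertions, deletions, and substitutions) can be sketched with the classic dynamic-programming recurrence; the helper name is illustrative, not Spark code:

```python
def levenshtein(a, b):
    # Rolling one-row DP over the standard edit-distance table.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

d = levenshtein("kitten", "sitting")
```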
libraryPathEnvName() - 类 中的静态方法org.apache.spark.util.Utils
Return the current system LD_LIBRARY_PATH name.
libraryPathEnvPrefix(Seq<String>) - 类 中的静态方法org.apache.spark.util.Utils
Return the prefix of a command that appends the given library paths to the system-specific library path environment variable.
LibSVMDataSource - org.apache.spark.ml.source.libsvm中的类
The libsvm package implements the Spark SQL data source API for loading LIBSVM data as a DataFrame.
LibSVMDataSource() - 类 的构造器org.apache.spark.ml.source.libsvm.LibSVMDataSource
 
lift() - 类 中的方法org.apache.spark.mllib.fpm.AssociationRules.Rule
Returns the lift of the rule.
like(String) - 类 中的方法org.apache.spark.sql.Column
SQL like expression.
limit(int) - 类 中的方法org.apache.spark.sql.Dataset
Returns a new Dataset by taking the first n rows.
line() - 异常错误 中的方法org.apache.spark.sql.AnalysisException
 
LinearDataGenerator - org.apache.spark.mllib.util中的类
:: DeveloperApi :: Generate sample data used for Linear Data.
LinearDataGenerator() - 类 的构造器org.apache.spark.mllib.util.LinearDataGenerator
 
LinearRegression - org.apache.spark.ml.regression中的类
Linear regression.
LinearRegression(String) - 类 的构造器org.apache.spark.ml.regression.LinearRegression
 
LinearRegression() - 类 的构造器org.apache.spark.ml.regression.LinearRegression
 
LinearRegressionModel - org.apache.spark.ml.regression中的类
Model produced by LinearRegression.
LinearRegressionModel - org.apache.spark.mllib.regression中的类
Regression model trained using LinearRegression.
LinearRegressionModel(Vector, double) - 类 的构造器org.apache.spark.mllib.regression.LinearRegressionModel
 
LinearRegressionParams - org.apache.spark.ml.regression中的接口
Params for linear regression.
LinearRegressionSummary - org.apache.spark.ml.regression中的类
Linear regression results evaluated on a dataset.
LinearRegressionTrainingSummary - org.apache.spark.ml.regression中的类
Linear regression training results.
LinearRegressionWithSGD - org.apache.spark.mllib.regression中的类
Train a linear regression model with no regularization using Stochastic Gradient Descent.
LinearSVC - org.apache.spark.ml.classification中的类
Linear SVM Classifier. This binary classifier optimizes the Hinge Loss using the OWLQN optimizer.
LinearSVC(String) - 类 的构造器org.apache.spark.ml.classification.LinearSVC
 
LinearSVC() - 类 的构造器org.apache.spark.ml.classification.LinearSVC
 
LinearSVCModel - org.apache.spark.ml.classification中的类
Linear SVM Model trained by LinearSVC.
LinearSVCParams - org.apache.spark.ml.classification中的接口
Params for linear SVM Classifier.
link(double) - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog$
 
link(double) - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity$
 
link(double) - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse$
 
link() - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression
 
link(double) - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression.Log$
 
link(double) - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression.Logit$
 
link(double) - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression.Probit$
 
link(double) - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt$
 
link() - 接口 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
Param for the name of link function which provides the relationship between the linear predictor and the mean of the distribution function.
link() - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
 
Link$() - 类 的构造器org.apache.spark.ml.regression.GeneralizedLinearRegression.Link$
 
linkPower() - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression
 
linkPower() - 接口 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
Param for the index in the power link function.
linkPower() - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
 
linkPredictionCol() - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression
 
linkPredictionCol() - 接口 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
Param for link prediction (linear predictor) column name.
linkPredictionCol() - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
 
listColumns(String) - 类 中的方法org.apache.spark.sql.catalog.Catalog
Returns a list of columns for the given table/view or temporary view.
listColumns(String, String) - 类 中的方法org.apache.spark.sql.catalog.Catalog
Returns a list of columns for the given table/view in the specified database.
listDatabases() - 类 中的方法org.apache.spark.sql.catalog.Catalog
Returns a list of databases available across all sessions.
listDatabases(String) - 接口 中的方法org.apache.spark.sql.hive.client.HiveClient
List the names of all the databases that match the specified pattern.
listenerBus() - 接口 中的方法org.apache.spark.ml.MLEvents
 
ListenerBus<L,E> - org.apache.spark.util中的接口
An event bus which posts events to its listeners.
listenerManager() - 类 中的方法org.apache.spark.sql.SparkSession
An interface to register custom QueryExecutionListeners that listen for execution metrics.
listenerManager() - 类 中的方法org.apache.spark.sql.SQLContext
An interface to register custom QueryExecutionListeners that listen for execution metrics.
listeners() - 接口 中的方法org.apache.spark.util.ListenerBus
 
listFiles() - 类 中的方法org.apache.spark.SparkContext
Returns a list of file paths that are added to resources.
listFunctions() - 类 中的方法org.apache.spark.sql.catalog.Catalog
Returns a list of functions registered in the current database.
listFunctions(String) - 类 中的方法org.apache.spark.sql.catalog.Catalog
Returns a list of functions registered in the specified database.
listFunctions(String, String) - 接口 中的方法org.apache.spark.sql.hive.client.HiveClient
Return the names of all functions that match the given pattern in the database.
listingTable(Seq<String>, Function1<T, Seq<Node>>, Iterable<T>, boolean, Option<String>, Seq<String>, boolean, boolean, Seq<Option<String>>) - 类 中的静态方法org.apache.spark.ui.UIUtils
Returns an HTML table constructed by generating a row for each object in a sequence.
listJars() - 类 中的方法org.apache.spark.SparkContext
Returns a list of jar files that are added to resources.
listListeners() - 类 中的方法org.apache.spark.sql.streaming.StreamingQueryManager
List all StreamingQueryListeners attached to this StreamingQueryManager.
listNamespaces() - 类 中的方法org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
 
listNamespaces(String[]) - 类 中的方法org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
 
listNamespaces() - 接口 中的方法org.apache.spark.sql.connector.catalog.SupportsNamespaces
List top-level namespaces from the catalog.
listNamespaces(String[]) - 接口 中的方法org.apache.spark.sql.connector.catalog.SupportsNamespaces
List namespaces in a namespace.
listOrcFiles(String, Configuration) - 类 中的静态方法org.apache.spark.sql.hive.orc.OrcFileOperator
 
listResourceIds(SparkConf, String) - 类 中的静态方法org.apache.spark.resource.ResourceUtils
 
listTables() - 类 中的方法org.apache.spark.sql.catalog.Catalog
Returns a list of tables/views in the current database.
listTables(String) - 类 中的方法org.apache.spark.sql.catalog.Catalog
Returns a list of tables/views in the specified database.
listTables(String[]) - 类 中的方法org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
 
listTables(String[]) - 接口 中的方法org.apache.spark.sql.connector.catalog.TableCatalog
List the tables in a namespace from the catalog.
listTables(String) - 接口 中的方法org.apache.spark.sql.hive.client.HiveClient
Returns the names of all tables in the given database.
listTables(String, String) - 接口 中的方法org.apache.spark.sql.hive.client.HiveClient
Returns the names of tables in the given database that matches the given pattern.
Lit - org.apache.spark.sql.connector.expressions中的类
Convenience extractor for any Literal.
Lit() - 类 的构造器org.apache.spark.sql.connector.expressions.Lit
 
lit(Object) - 类 中的静态方法org.apache.spark.sql.functions
Creates a Column of literal value.
literal(String) - 类 中的静态方法org.apache.spark.ml.feature.RFormulaParser
 
literal(T) - 类 中的静态方法org.apache.spark.sql.connector.expressions.Expressions
Create a literal from a value.
Literal<T> - org.apache.spark.sql.connector.expressions中的接口
Represents a constant literal value in the public expression API.
literal(T) - 类 中的静态方法org.apache.spark.sql.connector.expressions.LogicalExpressions
 
literal(T, DataType) - 类 中的静态方法org.apache.spark.sql.connector.expressions.LogicalExpressions
 
LIVE_ENTITY_UPDATE_MIN_FLUSH_PERIOD() - 类 中的静态方法org.apache.spark.internal.config.Status
 
LIVE_ENTITY_UPDATE_PERIOD() - 类 中的静态方法org.apache.spark.internal.config.Status
 
LiveEntityHelpers - org.apache.spark.status中的类
 
LiveEntityHelpers() - 类 的构造器org.apache.spark.status.LiveEntityHelpers
 
LiveExecutor - org.apache.spark.status中的类
 
LiveExecutor(String, long) - 类 的构造器org.apache.spark.status.LiveExecutor
 
LiveExecutorStageSummary - org.apache.spark.status中的类
 
LiveExecutorStageSummary(int, int, String) - 类 的构造器org.apache.spark.status.LiveExecutorStageSummary
 
LiveJob - org.apache.spark.status中的类
 
LiveJob(int, String, Option<String>, Option<Date>, Seq<Object>, Option<String>, int, Option<Object>) - 类 的构造器org.apache.spark.status.LiveJob
 
LiveRDD - org.apache.spark.status中的类
Tracker for data related to a persisted RDD.
LiveRDD(RDDInfo, StorageLevel) - 类 的构造器org.apache.spark.status.LiveRDD
 
LiveRDDDistribution - org.apache.spark.status中的类
 
LiveRDDDistribution(LiveExecutor) - 类 的构造器org.apache.spark.status.LiveRDDDistribution
 
LiveRDDPartition - org.apache.spark.status中的类
Data about a single partition of a cached RDD.
LiveRDDPartition(String, StorageLevel) - 类 的构造器org.apache.spark.status.LiveRDDPartition
 
LiveStage - org.apache.spark.status中的类
 
LiveStage() - 类 的构造器org.apache.spark.status.LiveStage
 
LiveTask - org.apache.spark.status中的类
 
LiveTask(TaskInfo, int, int, Option<Object>) - Constructor for class org.apache.spark.status.LiveTask
 
load(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
 
load(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
 
load(String) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
 
load(String) - Static method in class org.apache.spark.ml.classification.GBTClassifier
 
load(String) - Static method in class org.apache.spark.ml.classification.LinearSVC
 
load(String) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
 
load(String) - Static method in class org.apache.spark.ml.classification.LogisticRegression
 
load(String) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
 
load(String) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
 
load(String) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
 
load(String) - Static method in class org.apache.spark.ml.classification.NaiveBayes
 
load(String) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
 
load(String) - Static method in class org.apache.spark.ml.classification.OneVsRest
 
load(String) - Static method in class org.apache.spark.ml.classification.OneVsRestModel
 
load(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
 
load(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
 
load(String) - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
 
load(String) - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
 
load(String) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
 
load(String) - Static method in class org.apache.spark.ml.clustering.GaussianMixture
 
load(String) - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
 
load(String) - Static method in class org.apache.spark.ml.clustering.KMeans
 
load(String) - Static method in class org.apache.spark.ml.clustering.KMeansModel
 
load(String) - Static method in class org.apache.spark.ml.clustering.LDA
 
load(String) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
 
load(String) - Static method in class org.apache.spark.ml.clustering.PowerIterationClustering
 
load(String) - Static method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
 
load(String) - Static method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
 
load(String) - Static method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
 
load(String) - Static method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
 
load(String) - Static method in class org.apache.spark.ml.evaluation.RankingEvaluator
 
load(String) - Static method in class org.apache.spark.ml.evaluation.RegressionEvaluator
 
load(String) - Static method in class org.apache.spark.ml.feature.Binarizer
 
load(String) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
 
load(String) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
 
load(String) - Static method in class org.apache.spark.ml.feature.Bucketizer
 
load(String) - Static method in class org.apache.spark.ml.feature.ChiSqSelector
 
load(String) - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
 
load(String) - Static method in class org.apache.spark.ml.feature.ColumnPruner
 
load(String) - Static method in class org.apache.spark.ml.feature.CountVectorizer
 
load(String) - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
 
load(String) - Static method in class org.apache.spark.ml.feature.DCT
 
load(String) - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
 
load(String) - Static method in class org.apache.spark.ml.feature.FeatureHasher
 
load(String) - Static method in class org.apache.spark.ml.feature.HashingTF
 
load(String) - Static method in class org.apache.spark.ml.feature.IDF
 
load(String) - Static method in class org.apache.spark.ml.feature.IDFModel
 
load(String) - Static method in class org.apache.spark.ml.feature.Imputer
 
load(String) - Static method in class org.apache.spark.ml.feature.ImputerModel
 
load(String) - Static method in class org.apache.spark.ml.feature.IndexToString
 
load(String) - Static method in class org.apache.spark.ml.feature.Interaction
 
load(String) - Static method in class org.apache.spark.ml.feature.MaxAbsScaler
 
load(String) - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel
 
load(String) - Static method in class org.apache.spark.ml.feature.MinHashLSH
 
load(String) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
 
load(String) - Static method in class org.apache.spark.ml.feature.MinMaxScaler
 
load(String) - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
 
load(String) - Static method in class org.apache.spark.ml.feature.NGram
 
load(String) - Static method in class org.apache.spark.ml.feature.Normalizer
 
load(String) - Static method in class org.apache.spark.ml.feature.OneHotEncoder
 
load(String) - Static method in class org.apache.spark.ml.feature.OneHotEncoderModel
 
load(String) - Static method in class org.apache.spark.ml.feature.PCA
 
load(String) - Static method in class org.apache.spark.ml.feature.PCAModel
 
load(String) - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
 
load(String) - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
 
load(String) - Static method in class org.apache.spark.ml.feature.RegexTokenizer
 
load(String) - Static method in class org.apache.spark.ml.feature.RFormula
 
load(String) - Static method in class org.apache.spark.ml.feature.RFormulaModel
 
load(String) - Static method in class org.apache.spark.ml.feature.RobustScaler
 
load(String) - Static method in class org.apache.spark.ml.feature.RobustScalerModel
 
load(String) - Static method in class org.apache.spark.ml.feature.SQLTransformer
 
load(String) - Static method in class org.apache.spark.ml.feature.StandardScaler
 
load(String) - Static method in class org.apache.spark.ml.feature.StandardScalerModel
 
load(String) - Static method in class org.apache.spark.ml.feature.StopWordsRemover
 
load(String) - Static method in class org.apache.spark.ml.feature.StringIndexer
 
load(String) - Static method in class org.apache.spark.ml.feature.StringIndexerModel
 
load(String) - Static method in class org.apache.spark.ml.feature.Tokenizer
 
load(String) - Static method in class org.apache.spark.ml.feature.VectorAssembler
 
load(String) - Static method in class org.apache.spark.ml.feature.VectorAttributeRewriter
 
load(String) - Static method in class org.apache.spark.ml.feature.VectorIndexer
 
load(String) - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
 
load(String) - Static method in class org.apache.spark.ml.feature.VectorSizeHint
 
load(String) - Static method in class org.apache.spark.ml.feature.VectorSlicer
 
load(String) - Static method in class org.apache.spark.ml.feature.Word2Vec
 
load(String) - Static method in class org.apache.spark.ml.feature.Word2VecModel
 
load(String) - Static method in class org.apache.spark.ml.fpm.FPGrowth
 
load(String) - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
 
load(String) - Static method in class org.apache.spark.ml.Pipeline
 
load(String, SparkContext, String) - Method in class org.apache.spark.ml.Pipeline.SharedReadWrite$
Load metadata and stages for a Pipeline or PipelineModel
load(String) - Static method in class org.apache.spark.ml.PipelineModel
 
load(String) - Static method in class org.apache.spark.ml.r.RWrappers
 
load(String) - Static method in class org.apache.spark.ml.recommendation.ALS
 
load(String) - Static method in class org.apache.spark.ml.recommendation.ALSModel
 
load(String) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
 
load(String) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
 
load(String) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
 
load(String) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
 
load(String) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
 
load(String) - Static method in class org.apache.spark.ml.regression.GBTRegressor
 
load(String) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
 
load(String) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
 
load(String) - Static method in class org.apache.spark.ml.regression.IsotonicRegression
 
load(String) - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
 
load(String) - Static method in class org.apache.spark.ml.regression.LinearRegression
 
load(String) - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
 
load(String) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
 
load(String) - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
 
load(String) - Static method in class org.apache.spark.ml.tuning.CrossValidator
 
load(String) - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
 
load(String) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
 
load(String) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
 
load(String) - Method in interface org.apache.spark.ml.util.MLReadable
Reads an ML instance from the input path, a shortcut of read.load(path).
load(String) - Method in class org.apache.spark.ml.util.MLReader
Loads the ML component from the input path.
load(SparkContext, String) - Static method in class org.apache.spark.mllib.classification.LogisticRegressionModel
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.classification.NaiveBayesModel
 
load(SparkContext, String) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$
 
load(SparkContext, String) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.classification.SVMModel
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
 
load(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0$
 
load(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV2_0$
 
load(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV3_0$
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.clustering.DistributedLDAModel
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.clustering.KMeansModel
 
load(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0$
 
load(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV2_0$
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.clustering.LocalLDAModel
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel
 
load(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0$
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.feature.ChiSqSelectorModel
 
load(SparkContext, String) - Method in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.feature.Word2VecModel
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.fpm.FPGrowthModel
 
load(SparkContext, String) - Method in class org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0$
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.fpm.PrefixSpanModel
 
load(SparkContext, String) - Method in class org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0$
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
Load a model from the given path.
load(SparkContext, String) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0$
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.regression.LassoModel
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.regression.LinearRegressionModel
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.regression.RidgeRegressionModel
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
 
load(SparkContext, String, String, int) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
 
load(SparkContext, String) - Static method in class org.apache.spark.mllib.tree.model.RandomForestModel
 
load(SparkContext, String) - Method in interface org.apache.spark.mllib.util.Loader
Load a model from the given path.
load(String, SQLConf) - Static method in class org.apache.spark.sql.connector.catalog.Catalogs
Load and configure a catalog by name.
load(String...) - Method in class org.apache.spark.sql.DataFrameReader
Loads input in as a DataFrame, for data sources that support multiple paths.
load() - Method in class org.apache.spark.sql.DataFrameReader
Loads input in as a DataFrame, for data sources that don't require a path (e.g. external key-value stores).
load(String) - Method in class org.apache.spark.sql.DataFrameReader
Loads input in as a DataFrame, for data sources that require a path (e.g. data backed by a local or distributed file system).
load(Seq<String>) - Method in class org.apache.spark.sql.DataFrameReader
Loads input in as a DataFrame, for data sources that support multiple paths.
load() - Method in class org.apache.spark.sql.streaming.DataStreamReader
Loads input data stream in as a DataFrame, for data streams that don't require a path (e.g. external key-value stores).
load(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
Loads input in as a DataFrame, for data streams that read from some path.
loadClass(String, boolean) - Method in class org.apache.spark.util.ChildFirstURLClassLoader
 
loadClass(String, boolean) - Method in class org.apache.spark.util.ParentClassLoader
 
loadData(SparkContext, String, String) - Method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$
Helper method for loading GLM classification model data.
loadData(SparkContext, String, String, int) - Method in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$
Helper method for loading GLM regression model data.
loadDefaultSparkProperties(SparkConf, String) - Static method in class org.apache.spark.util.Utils
Load default Spark properties from the given file.
loadDefaultStopWords(String) - Static method in class org.apache.spark.ml.feature.StopWordsRemover
Loads the default stop words for the given language.
loadDynamicPartitions(String, String, String, LinkedHashMap<String, String>, boolean, int) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Loads new dynamic partitions into an existing table.
Loader<M extends Saveable> - Interface in org.apache.spark.mllib.util
:: DeveloperApi :: Trait for classes which can load models and transformers from files.
loadExtensions(Class<T>, Seq<String>, SparkConf) - Static method in class org.apache.spark.util.Utils
Create instances of extension classes.
loadImpl(String, SparkSession, String, String) - Static method in class org.apache.spark.ml.tree.EnsembleModelReadWrite
Helper method for loading a tree ensemble from disk.
loadImpl(Dataset<Row>, Item, ClassTag<Item>) - Method in class org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0$
 
loadImpl(Dataset<Row>, Item, ClassTag<Item>) - Method in class org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0$
 
LoadInstanceEnd<T> - Class in org.apache.spark.ml
Event fired after MLReader.load.
LoadInstanceEnd() - Constructor for class org.apache.spark.ml.LoadInstanceEnd
 
LoadInstanceStart<T> - Class in org.apache.spark.ml
Event fired before MLReader.load.
LoadInstanceStart(String) - Constructor for class org.apache.spark.ml.LoadInstanceStart
 
loadLabeledPoints(SparkContext, String, int) - Static method in class org.apache.spark.mllib.util.MLUtils
Loads labeled points saved using RDD[LabeledPoint].saveAsTextFile.
loadLabeledPoints(SparkContext, String) - Static method in class org.apache.spark.mllib.util.MLUtils
Loads labeled points saved using RDD[LabeledPoint].saveAsTextFile with the default number of partitions.
loadLibSVMFile(SparkContext, String, int, int) - Static method in class org.apache.spark.mllib.util.MLUtils
Loads labeled data in the LIBSVM format into an RDD[LabeledPoint].
loadLibSVMFile(SparkContext, String, int) - Static method in class org.apache.spark.mllib.util.MLUtils
Loads labeled data in the LIBSVM format into an RDD[LabeledPoint], with the default number of partitions.
loadLibSVMFile(SparkContext, String) - Static method in class org.apache.spark.mllib.util.MLUtils
Loads binary labeled data in the LIBSVM format into an RDD[LabeledPoint], with number of features determined automatically and the default number of partitions.
loadNamespaceMetadata(String[]) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
 
loadNamespaceMetadata(String[]) - Method in interface org.apache.spark.sql.connector.catalog.SupportsNamespaces
Load metadata properties for a namespace.
loadPartition(String, String, String, LinkedHashMap<String, String>, boolean, boolean, boolean) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Loads a static partition into an existing table.
loadRelation(CatalogPlugin, Identifier) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
 
loadTable(CatalogPlugin, Identifier) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
 
loadTable(Identifier) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
 
loadTable(Identifier) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
Load table metadata by identifier from the catalog.
loadTable(String, String, boolean, boolean) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Loads data into an existing table.
loadTreeNodes(String, org.apache.spark.ml.util.DefaultParamsReader.Metadata, SparkSession) - Static method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite
Load a decision tree from a file.
loadVectors(SparkContext, String, int) - Static method in class org.apache.spark.mllib.util.MLUtils
Loads vectors saved using RDD[Vector].saveAsTextFile.
loadVectors(SparkContext, String) - Static method in class org.apache.spark.mllib.util.MLUtils
Loads vectors saved using RDD[Vector].saveAsTextFile with the default number of partitions.
LOCAL_BLOCKS_FETCHED() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
 
LOCAL_BYTES_READ() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
 
LOCAL_CLUSTER_REGEX() - Static method in class org.apache.spark.SparkMasterRegex
 
LOCAL_N_FAILURES_REGEX() - Static method in class org.apache.spark.SparkMasterRegex
 
LOCAL_N_REGEX() - Static method in class org.apache.spark.SparkMasterRegex
 
LOCAL_SCHEME() - Static method in class org.apache.spark.util.Utils
Scheme used for files that are locally available on worker nodes in the cluster.
LOCAL_STORE_DIR() - Static method in class org.apache.spark.internal.config.History
 
localBlocksFetched() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions
 
localBlocksFetched() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics
 
localBytesRead() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics
 
localCanonicalHostName() - Static method in class org.apache.spark.util.Utils
Get the local machine's FQDN.
localCheckpoint() - Method in class org.apache.spark.rdd.RDD
Mark this RDD for local checkpointing using Spark's existing caching layer.
localCheckpoint() - Method in class org.apache.spark.sql.Dataset
Eagerly locally checkpoints a Dataset and return the new Dataset.
localCheckpoint(boolean) - Method in class org.apache.spark.sql.Dataset
Locally checkpoints a Dataset and return the new Dataset.
LOCALDATE() - Static method in class org.apache.spark.sql.Encoders
Creates an encoder that serializes instances of the java.time.LocalDate class to the internal representation of nullable Catalyst's DateType.
localDirs() - Method in class org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus
 
localDirs() - Method in class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
 
locale() - Method in class org.apache.spark.ml.feature.StopWordsRemover
Locale of the input for case insensitive matching.
localHostName() - Static method in class org.apache.spark.util.Utils
Get the local machine's hostname.
localHostNameForURI() - Static method in class org.apache.spark.util.Utils
Get the local machine's URI.
LOCALITY() - Static method in class org.apache.spark.status.TaskIndexNames
 
localityAwareTasks() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors
 
localitySummary() - Method in class org.apache.spark.status.LiveStage
 
LocalKMeans - Class in org.apache.spark.mllib.clustering
A utility object to run K-means locally.
LocalKMeans() - Constructor for class org.apache.spark.mllib.clustering.LocalKMeans
 
LocalLDAModel - Class in org.apache.spark.ml.clustering
Local (non-distributed) model fitted by LDA.
LocalLDAModel - Class in org.apache.spark.mllib.clustering
Local LDA model.
localSeqToDatasetHolder(Seq<T>, Encoder<T>) - Method in class org.apache.spark.sql.SQLImplicits
Creates a Dataset from a local Seq.
localSparkRPackagePath() - Static method in class org.apache.spark.api.r.RUtils
Get the SparkR package path in the local spark distribution.
locate(String, Column) - Static method in class org.apache.spark.sql.functions
Locate the position of the first occurrence of substr.
locate(String, Column, int) - Static method in class org.apache.spark.sql.functions
Locate the position of the first occurrence of substr in a string column, after position pos.
location() - Method in interface org.apache.spark.scheduler.MapStatus
Location where this task was run.
location() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
 
location() - Method in class org.apache.spark.ui.storage.ExecutorStreamSummary
 
locations() - Method in class org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus
 
locationUri() - Method in class org.apache.spark.sql.catalog.Database
 
log() - Method in interface org.apache.spark.internal.Logging
 
log(Function0<Parsers.Parser<T>>, String) - Static method in class org.apache.spark.ml.feature.RFormulaParser
 
log(Column) - Static method in class org.apache.spark.sql.functions
Computes the natural logarithm of the given value.
log(String) - Static method in class org.apache.spark.sql.functions
Computes the natural logarithm of the given column.
log(double, Column) - Static method in class org.apache.spark.sql.functions
Returns the first argument-base logarithm of the second argument.
log(double, String) - Static method in class org.apache.spark.sql.functions
Returns the first argument-base logarithm of the second argument.
Log$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Log$
 
log10(Column) - Static method in class org.apache.spark.sql.functions
Computes the logarithm of the given value in base 10.
log10(String) - Static method in class org.apache.spark.sql.functions
Computes the logarithm of the given value in base 10.
log1p(Column) - Static method in class org.apache.spark.sql.functions
Computes the natural logarithm of the given value plus one.
log1p(String) - Static method in class org.apache.spark.sql.functions
Computes the natural logarithm of the given column plus one.
log2(Column) - Static method in class org.apache.spark.sql.functions
Computes the logarithm of the given column in base 2.
log2(String) - Static method in class org.apache.spark.sql.functions
Computes the logarithm of the given value in base 2.
logDebug(Function0<String>) - Method in interface org.apache.spark.internal.Logging
 
logDebug(Function0<String>, Throwable) - Method in interface org.apache.spark.internal.Logging
 
logDeprecationWarning(String) - Static method in class org.apache.spark.SparkConf
Logs a warning message if the given config key is deprecated.
logError(Function0<String>) - Method in interface org.apache.spark.internal.Logging
 
logError(Function0<String>, Throwable) - Method in interface org.apache.spark.internal.Logging
 
logEvent() - Method in interface org.apache.spark.ml.MLEvent
 
logEvent(MLEvent) - Method in interface org.apache.spark.ml.MLEvents
Log MLEvent to send.
logEvent() - Method in interface org.apache.spark.scheduler.SparkListenerEvent
 
Logging - Interface in org.apache.spark.internal
Utility trait for classes that want to log data.
LogicalExpressions - Class in org.apache.spark.sql.connector.expressions
Helper methods for working with the logical expressions API.
LogicalExpressions() - Constructor for class org.apache.spark.sql.connector.expressions.LogicalExpressions
 
logInfo(Function0<String>) - Method in interface org.apache.spark.internal.Logging
 
logInfo(Function0<String>, Throwable) - Method in interface org.apache.spark.internal.Logging
 
LogisticGradient - Class in org.apache.spark.mllib.optimization
:: DeveloperApi :: Compute gradient and loss for a multinomial logistic loss function, as used in multi-class classification (it is also used in binary logistic regression).
LogisticGradient(int) - Constructor for class org.apache.spark.mllib.optimization.LogisticGradient
 
LogisticGradient() - Constructor for class org.apache.spark.mllib.optimization.LogisticGradient
 
LogisticRegression - Class in org.apache.spark.ml.classification
Logistic regression.
LogisticRegression(String) - Constructor for class org.apache.spark.ml.classification.LogisticRegression
 
LogisticRegression() - Constructor for class org.apache.spark.ml.classification.LogisticRegression
 
LogisticRegressionDataGenerator - Class in org.apache.spark.mllib.util
:: DeveloperApi :: Generate test data for LogisticRegression.
LogisticRegressionDataGenerator() - Constructor for class org.apache.spark.mllib.util.LogisticRegressionDataGenerator
 
LogisticRegressionModel - Class in org.apache.spark.ml.classification
Model produced by LogisticRegression.
LogisticRegressionModel - Class in org.apache.spark.mllib.classification
Classification model trained using Multinomial/Binary Logistic Regression.
LogisticRegressionModel(Vector, double, int, int) - Constructor for class org.apache.spark.mllib.classification.LogisticRegressionModel
 
LogisticRegressionModel(Vector, double) - Constructor for class org.apache.spark.mllib.classification.LogisticRegressionModel
Constructs a LogisticRegressionModel with weights and intercept for binary classification.
LogisticRegressionParams - Interface in org.apache.spark.ml.classification
Params for logistic regression.
LogisticRegressionSummary - Interface in org.apache.spark.ml.classification
Abstraction for logistic regression results for a given model.
LogisticRegressionSummaryImpl - Class in org.apache.spark.ml.classification
Multiclass logistic regression results for a given model.
LogisticRegressionSummaryImpl(Dataset<Row>, String, String, String, String) - Constructor for class org.apache.spark.ml.classification.LogisticRegressionSummaryImpl
 
LogisticRegressionTrainingSummary - Interface in org.apache.spark.ml.classification
Abstraction for multiclass logistic regression training results.
LogisticRegressionTrainingSummaryImpl - Class in org.apache.spark.ml.classification
Multiclass logistic regression training results.
LogisticRegressionTrainingSummaryImpl(Dataset<Row>, String, String, String, String, double[]) - Constructor for class org.apache.spark.ml.classification.LogisticRegressionTrainingSummaryImpl
 
LogisticRegressionWithLBFGS - Class in org.apache.spark.mllib.classification
Train a classification model for Multinomial/Binary Logistic Regression using Limited-memory BFGS.
LogisticRegressionWithLBFGS() - Constructor for class org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
 
LogisticRegressionWithSGD - Class in org.apache.spark.mllib.classification
Train a classification model for Binary Logistic Regression using Stochastic Gradient Descent.
Logit$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Logit$
 
logLikelihood() - 类 中的方法org.apache.spark.ml.clustering.ExpectationAggregator
 
logLikelihood() - 类 中的方法org.apache.spark.ml.clustering.GaussianMixtureSummary
 
logLikelihood(Dataset<?>) - 类 中的方法org.apache.spark.ml.clustering.LDAModel
Calculates a lower bound on the log likelihood of the entire corpus.
logLikelihood() - 类 中的方法org.apache.spark.mllib.clustering.DistributedLDAModel
 
logLikelihood() - 类 中的方法org.apache.spark.mllib.clustering.ExpectationSum
 
logLikelihood(RDD<Tuple2<Object, Vector>>) - 类 中的方法org.apache.spark.mllib.clustering.LocalLDAModel
Calculates a lower bound on the log likelihood of the entire corpus.
logLikelihood(JavaPairRDD<Long, Vector>) - 类 中的方法org.apache.spark.mllib.clustering.LocalLDAModel
Java-friendly version of logLikelihood
logLoss(double) - 类 中的方法org.apache.spark.mllib.evaluation.MulticlassMetrics
Returns the log-loss, aka logistic loss or cross-entropy loss.
LogLoss - org.apache.spark.mllib.tree.loss中的类
:: DeveloperApi :: Class for log loss calculation (for classification).
LogLoss() - 类 的构造器org.apache.spark.mllib.tree.loss.LogLoss
 
logName() - 接口 中的方法org.apache.spark.internal.Logging
 
LogNormalGenerator - org.apache.spark.mllib.random中的类
:: DeveloperApi :: Generates i.i.d. samples from the log normal distribution with the given mean and standard deviation.
LogNormalGenerator(double, double) - 类 的构造器org.apache.spark.mllib.random.LogNormalGenerator
 
logNormalGraph(SparkContext, int, int, double, double, long) - 类 中的静态方法org.apache.spark.graphx.util.GraphGenerators
Generate a graph whose vertex out degree distribution is log normal.
logNormalJavaRDD(JavaSparkContext, double, double, long, int, long) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
Java-friendly version of RandomRDDs.logNormalRDD.
logNormalJavaRDD(JavaSparkContext, double, double, long, int) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.logNormalJavaRDD with the default seed.
logNormalJavaRDD(JavaSparkContext, double, double, long) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.logNormalJavaRDD with the default number of partitions and the default seed.
logNormalJavaVectorRDD(JavaSparkContext, double, double, long, int, int, long) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
Java-friendly version of RandomRDDs.logNormalVectorRDD.
logNormalJavaVectorRDD(JavaSparkContext, double, double, long, int, int) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.logNormalJavaVectorRDD with the default seed.
logNormalJavaVectorRDD(JavaSparkContext, double, double, long, int) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.logNormalJavaVectorRDD with the default number of partitions and the default seed.
logNormalRDD(SparkContext, double, double, long, int, long) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
Generates an RDD comprised of i.i.d.
logNormalVectorRDD(SparkContext, double, double, long, int, int, long) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
Generates an RDD[Vector] with vectors containing i.i.d.
logpdf(Vector) - 类 中的方法org.apache.spark.ml.stat.distribution.MultivariateGaussian
Returns the log-density of this multivariate Gaussian at given point, x
logpdf(Vector) - 类 中的方法org.apache.spark.mllib.stat.distribution.MultivariateGaussian
Returns the log-density of this multivariate Gaussian at given point, x
logPerplexity(Dataset<?>) - 类 中的方法org.apache.spark.ml.clustering.LDAModel
Calculate an upper bound on perplexity.
logPerplexity(RDD<Tuple2<Object, Vector>>) - 类 中的方法org.apache.spark.mllib.clustering.LocalLDAModel
Calculate an upper bound on perplexity.
logPerplexity(JavaPairRDD<Long, Vector>) - 类 中的方法org.apache.spark.mllib.clustering.LocalLDAModel
Java-friendly version of logPerplexity
logPrior() - 类 中的方法org.apache.spark.ml.clustering.DistributedLDAModel
 
logPrior() - 类 中的方法org.apache.spark.mllib.clustering.DistributedLDAModel
 
logResourceInfo(String, Map<String, ResourceInformation>) - 类 中的静态方法org.apache.spark.resource.ResourceUtils
 
logStartFromJson(JsonAST.JValue) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
logStartToJson(SparkListenerLogStart) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
logTrace(Function0<String>) - 接口 中的方法org.apache.spark.internal.Logging
 
logTrace(Function0<String>, Throwable) - 接口 中的方法org.apache.spark.internal.Logging
 
logTuningParams(org.apache.spark.ml.util.Instrumentation) - 接口 中的方法org.apache.spark.ml.tuning.ValidatorParams
Instrumentation logging for tuning params including the inner estimator and evaluator info.
logUncaughtExceptions(Function0<T>) - 类 中的静态方法org.apache.spark.util.Utils
Execute the given block, logging and re-throwing any uncaught exception.
logUrlMap() - 类 中的方法org.apache.spark.scheduler.cluster.ExecutorInfo
 
logUrls() - 类 中的方法org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
 
logWarning(Function0<String>) - 接口 中的方法org.apache.spark.internal.Logging
 
logWarning(Function0<String>, Throwable) - 接口 中的方法org.apache.spark.internal.Logging
 
LONG() - 类 中的静态方法org.apache.spark.sql.Encoders
An encoder for nullable long type.
longAccumulator() - 类 中的方法org.apache.spark.SparkContext
Create and register a long accumulator, which starts with 0 and accumulates inputs by add.
longAccumulator(String) - 类 中的方法org.apache.spark.SparkContext
Create and register a long accumulator, which starts with 0 and accumulates inputs by add.
LongAccumulator - org.apache.spark.util中的类
An accumulator for computing sum, count, and average of 64-bit integers.
LongAccumulator() - 类 的构造器org.apache.spark.util.LongAccumulator
 
LongAccumulatorSource - org.apache.spark.metrics.source中的类
 
LongAccumulatorSource() - 类 的构造器org.apache.spark.metrics.source.LongAccumulatorSource
 
LongExactNumeric - org.apache.spark.sql.types中的类
 
LongExactNumeric() - 类 的构造器org.apache.spark.sql.types.LongExactNumeric
 
LongParam - org.apache.spark.ml.param中的类
:: DeveloperApi :: Specialized version of Param[Long] for Java.
LongParam(String, String, String, Function1<Object, Object>) - 类 的构造器org.apache.spark.ml.param.LongParam
 
LongParam(String, String, String) - 类 的构造器org.apache.spark.ml.param.LongParam
 
LongParam(Identifiable, String, String, Function1<Object, Object>) - 类 的构造器org.apache.spark.ml.param.LongParam
 
LongParam(Identifiable, String, String) - 类 的构造器org.apache.spark.ml.param.LongParam
 
LongType - 类 中的静态变量org.apache.spark.sql.types.DataTypes
Gets the LongType object.
LongType - org.apache.spark.sql.types中的类
The data type representing Long values.
LongType() - 类 的构造器org.apache.spark.sql.types.LongType
 
lookup(K) - 类 中的方法org.apache.spark.api.java.JavaPairRDD
Return the list of values in the RDD for key key.
lookup(K) - 类 中的方法org.apache.spark.rdd.PairRDDFunctions
Return the list of values in the RDD for key key.
LookupCatalog - Interface in org.apache.spark.sql.connector.catalog
A trait to encapsulate catalog lookup function and helpful extractors.
LookupCatalog.AsTableIdentifier - Class in org.apache.spark.sql.connector.catalog
Extract legacy table identifier from a multi-part identifier.
LookupCatalog.AsTableIdentifier$ - Class in org.apache.spark.sql.connector.catalog
Extract legacy table identifier from a multi-part identifier.
LookupCatalog.AsTemporaryViewIdentifier - Class in org.apache.spark.sql.connector.catalog
For temp views, extract a table identifier from a multi-part identifier if it has no catalog.
LookupCatalog.AsTemporaryViewIdentifier$ - Class in org.apache.spark.sql.connector.catalog
For temp views, extract a table identifier from a multi-part identifier if it has no catalog.
LookupCatalog.CatalogAndIdentifierParts - Class in org.apache.spark.sql.connector.catalog
Extract catalog and the rest name parts from a multi-part identifier.
LookupCatalog.CatalogAndIdentifierParts$ - Class in org.apache.spark.sql.connector.catalog
Extract catalog and the rest name parts from a multi-part identifier.
LookupCatalog.CatalogAndNamespace - Class in org.apache.spark.sql.connector.catalog
Extract catalog and namespace from a multi-part identifier with the current catalog if needed.
LookupCatalog.CatalogAndNamespace$ - Class in org.apache.spark.sql.connector.catalog
Extract catalog and namespace from a multi-part identifier with the current catalog if needed.
LookupCatalog.CatalogObjectIdentifier - Class in org.apache.spark.sql.connector.catalog
Extract catalog and identifier from a multi-part identifier with the current catalog if needed.
LookupCatalog.CatalogObjectIdentifier$ - Class in org.apache.spark.sql.connector.catalog
Extract catalog and identifier from a multi-part identifier with the current catalog if needed.
lookupRpcTimeout(SparkConf) - Static method in class org.apache.spark.util.RpcUtils
Returns the default Spark timeout to use for RPC remote endpoint lookup.
loss(DenseMatrix<Object>, DenseMatrix<Object>, DenseMatrix<Object>) - Method in interface org.apache.spark.ml.ann.LossFunction
Returns the value of the loss function.
loss() - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
The current loss value of this aggregator.
loss() - Method in interface org.apache.spark.ml.param.shared.HasLoss
Param for the loss function to be optimized.
loss() - Method in class org.apache.spark.ml.regression.AFTAggregator
 
loss() - Method in class org.apache.spark.ml.regression.LinearRegression
 
loss() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
 
loss() - Method in interface org.apache.spark.ml.regression.LinearRegressionParams
The loss function to be optimized.
loss() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
 
Loss - Interface in org.apache.spark.mllib.tree.loss
:: DeveloperApi :: Trait for adding "pluggable" loss functions for the gradient boosting algorithm.
Losses - Class in org.apache.spark.mllib.tree.loss
 
Losses() - Constructor for class org.apache.spark.mllib.tree.loss.Losses
 
LossFunction - Interface in org.apache.spark.ml.ann
Trait for loss functions.
LossReasonPending - Class in org.apache.spark.scheduler
A loss reason that means we don't yet know why the executor exited.
LossReasonPending() - Constructor for class org.apache.spark.scheduler.LossReasonPending
 
lossSum() - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
 
lossType() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
 
lossType() - Method in class org.apache.spark.ml.classification.GBTClassifier
 
lossType() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
 
lossType() - Method in class org.apache.spark.ml.regression.GBTRegressor
 
lossType() - Method in interface org.apache.spark.ml.tree.GBTClassifierParams
Loss function which GBT tries to minimize.
lossType() - Method in interface org.apache.spark.ml.tree.GBTRegressorParams
Loss function which GBT tries to minimize.
LOST() - Static method in class org.apache.spark.TaskState
 
low() - Method in class org.apache.spark.partial.BoundedDouble
 
lower() - Method in class org.apache.spark.ml.feature.RobustScaler
 
lower() - Method in class org.apache.spark.ml.feature.RobustScalerModel
 
lower() - Method in interface org.apache.spark.ml.feature.RobustScalerParams
Lower quantile used to calculate the quantile range, shared by all features. Default: 0.25.
lower(Column) - Static method in class org.apache.spark.sql.functions
Converts a string column to lower case.
lowerBoundsOnCoefficients() - Method in class org.apache.spark.ml.classification.LogisticRegression
 
lowerBoundsOnCoefficients() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
 
lowerBoundsOnCoefficients() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
The lower bounds on coefficients if fitting under bound constrained optimization.
lowerBoundsOnIntercepts() - Method in class org.apache.spark.ml.classification.LogisticRegression
 
lowerBoundsOnIntercepts() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
 
lowerBoundsOnIntercepts() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
The lower bounds on intercepts if fitting under bound constrained optimization.
LowPrioritySQLImplicits - Interface in org.apache.spark.sql
Lower priority implicit methods for converting Scala objects into Datasets.
lpad(Column, int, String) - Static method in class org.apache.spark.sql.functions
Left-pad the string column with pad to a length of len.
LSHParams - Interface in org.apache.spark.ml.feature
Params for LSH.
lt(double) - Static method in class org.apache.spark.ml.param.ParamValidators
Check if value is less than upperBound.
lt(Object) - Method in class org.apache.spark.sql.Column
Less than.
lt(T, T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
 
lt(T, T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
 
lt(double, double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
 
lt(float, float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
 
lt(T, T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
 
lt(T, T) - Static method in class org.apache.spark.sql.types.LongExactNumeric
 
lt(T, T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
 
ltEq(double) - Static method in class org.apache.spark.ml.param.ParamValidators
Check if value is less than or equal to upperBound.
lteq(T, T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
 
lteq(T, T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
 
lteq(double, double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
 
lteq(float, float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
 
lteq(T, T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
 
lteq(T, T) - Static method in class org.apache.spark.sql.types.LongExactNumeric
 
lteq(T, T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
 
ltrim(Column) - Static method in class org.apache.spark.sql.functions
Trim the spaces from the left end of the specified string value.
ltrim(Column, String) - Static method in class org.apache.spark.sql.functions
Trim the specified character string from the left end of the specified string column.
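The string functions indexed above (lower, lpad, ltrim) compose like any other Column expression. A minimal sketch, assuming an existing SparkSession named spark (hypothetical setup):

```scala
import org.apache.spark.sql.functions.{lower, lpad, ltrim}

// Hypothetical setup: `spark` is an existing SparkSession.
import spark.implicits._
val df = Seq("  Hello").toDF("s")

df.select(
  lower($"s"),         // "  hello"
  lpad($"s", 10, "*"), // pads on the left up to length 10: "***  Hello"
  ltrim($"s")          // "Hello"
).show()
```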
LZ4CompressionCodec - Class in org.apache.spark.io
:: DeveloperApi :: LZ4 implementation of CompressionCodec.
LZ4CompressionCodec(SparkConf) - Constructor for class org.apache.spark.io.LZ4CompressionCodec
 
LZFCompressionCodec - Class in org.apache.spark.io
:: DeveloperApi :: LZF implementation of CompressionCodec.
LZFCompressionCodec(SparkConf) - Constructor for class org.apache.spark.io.LZFCompressionCodec
 

M

main(String[]) - Static method in class org.apache.spark.ml.param.shared.SharedParamsCodeGen
 
main(String[]) - Static method in class org.apache.spark.mllib.util.KMeansDataGenerator
 
main(String[]) - Static method in class org.apache.spark.mllib.util.LinearDataGenerator
 
main(String[]) - Static method in class org.apache.spark.mllib.util.LogisticRegressionDataGenerator
 
main(String[]) - Static method in class org.apache.spark.mllib.util.MFDataGenerator
 
main(String[]) - Static method in class org.apache.spark.mllib.util.SVMDataGenerator
 
main(String[]) - Static method in class org.apache.spark.streaming.util.RawTextSender
 
main(String[]) - Static method in class org.apache.spark.ui.UIWorkloadGenerator
 
main(String[]) - Method in interface org.apache.spark.util.CommandLineUtils
 
majorMinorVersion(String) - Static method in class org.apache.spark.util.VersionUtils
Given a Spark version string, return the (major version number, minor version number).
majorVersion(String) - Static method in class org.apache.spark.util.VersionUtils
Given a Spark version string, return the major version number.
makeBinarySearch(Ordering<K>, ClassTag<K>) - Static method in class org.apache.spark.util.CollectionsUtils
 
makeCopy() - Method in interface org.apache.spark.sql.Encoder
Create a copied Encoder.
makeDescription(String, String, boolean) - Static method in class org.apache.spark.ui.UIUtils
Returns HTML rendering of a job or stage description.
makeDriverRef(String, SparkConf, org.apache.spark.rpc.RpcEnv) - Static method in class org.apache.spark.util.RpcUtils
Retrieve a RpcEndpointRef which is located in the driver via its name.
makeHref(boolean, String, String) - Static method in class org.apache.spark.ui.UIUtils
Return the correct Href after checking if master is running in the reverse proxy mode or not.
makeProgressBar(int, int, int, int, Map<String, Object>, int) - Static method in class org.apache.spark.ui.UIUtils
 
makeRDD(Seq<T>, int, ClassTag<T>) - Method in class org.apache.spark.SparkContext
Distribute a local Scala collection to form an RDD.
makeRDD(Seq<Tuple2<T, Seq<String>>>, ClassTag<T>) - Method in class org.apache.spark.SparkContext
Distribute a local Scala collection to form an RDD, with one or more location preferences (hostnames of Spark nodes) for each object.
makeRDDForPartitionedTable(Seq<Partition>) - Method in interface org.apache.spark.sql.hive.TableReader
 
makeRDDForTable(Table) - Method in interface org.apache.spark.sql.hive.TableReader
 
map(Function<T, R>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return a new RDD by applying a function to all elements of this RDD.
map(Function1<Object, Object>) - Method in interface org.apache.spark.ml.linalg.Matrix
Map the values of this matrix using a function.
map(Function1<Object, Object>) - Method in interface org.apache.spark.mllib.linalg.Matrix
Map the values of this matrix using a function.
map(Function1<R, T>) - Method in class org.apache.spark.partial.PartialResult
Transform this PartialResult into a PartialResult of type T.
map(Function1<T, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
Return a new RDD by applying a function to all elements of this RDD.
map(DataType, DataType) - Method in class org.apache.spark.sql.ColumnName
Creates a new StructField of type map.
map(MapType) - Method in class org.apache.spark.sql.ColumnName
 
map(Function1<T, U>, Encoder<U>) - Method in class org.apache.spark.sql.Dataset
(Scala-specific) Returns a new Dataset that contains the result of applying func to each element.
map(MapFunction<T, U>, Encoder<U>) - Method in class org.apache.spark.sql.Dataset
(Java-specific) Returns a new Dataset that contains the result of applying func to each element.
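Dataset.map above has paired Scala- and Java-specific overloads; the Scala one takes its Encoder implicitly. A minimal sketch, assuming an existing SparkSession named spark (hypothetical setup):

```scala
// Hypothetical setup: `spark` is an existing SparkSession.
import spark.implicits._ // brings Encoders for common types into scope

val ds = Seq(1, 2, 3).toDS()

// The Encoder[Int] required by map is supplied implicitly.
val doubled = ds.map(_ * 2) // Dataset[Int] containing 2, 4, 6
```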
map(Column...) - Static method in class org.apache.spark.sql.functions
Creates a new map column.
map(Seq<Column>) - Static method in class org.apache.spark.sql.functions
Creates a new map column.
map(Function<T, U>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream by applying a function to all elements of this DStream.
map(Function1<T, U>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream by applying a function to all elements of this DStream.
map_concat(Column...) - Static method in class org.apache.spark.sql.functions
Returns the union of all the given maps.
map_concat(Seq<Column>) - Static method in class org.apache.spark.sql.functions
Returns the union of all the given maps.
map_entries(Column) - Static method in class org.apache.spark.sql.functions
Returns an unordered array of all entries in the given map.
map_filter(Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
Returns a map whose key-value pairs satisfy a predicate.
map_from_arrays(Column, Column) - Static method in class org.apache.spark.sql.functions
Creates a new map column.
map_from_entries(Column) - Static method in class org.apache.spark.sql.functions
Returns a map created from the given array of entries.
map_keys(Column) - Static method in class org.apache.spark.sql.functions
Returns an unordered array containing the keys of the map.
map_values(Column) - Static method in class org.apache.spark.sql.functions
Returns an unordered array containing the values of the map.
map_zip_with(Column, Column, Function3<Column, Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
Merge two given maps, key-wise into a single map using a function.
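The map_* functions above all build or inspect MapType columns. A minimal sketch, assuming an existing SparkSession named spark (hypothetical setup):

```scala
import org.apache.spark.sql.functions.{array, col, lit, map_from_arrays, map_keys, map_values}

// Hypothetical setup: `spark` is an existing SparkSession.
val df = spark.range(1).select(
  // Builds the map column {"a" -> 1, "b" -> 2} from parallel key/value arrays.
  map_from_arrays(array(lit("a"), lit("b")), array(lit(1), lit(2))).as("m")
)

// map_keys/map_values return unordered arrays of the map's keys and values.
df.select(map_keys(col("m")), map_values(col("m"))).show()
```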
mapAsSerializableJavaMap(Map<A, B>) - Static method in class org.apache.spark.api.java.JavaUtils
 
mapEdgePartitions(Function2<Object, EdgePartition<ED, VD>, EdgePartition<ED2, VD2>>, ClassTag<ED2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
 
mapEdges(Function1<Edge<ED>, ED2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.Graph
Transforms each edge attribute in the graph using the map function.
mapEdges(Function2<Object, Iterator<Edge<ED>>, Iterator<ED2>>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.Graph
Transforms each edge attribute using the map function, passing it a whole partition at a time.
mapEdges(Function2<Object, Iterator<Edge<ED>>, Iterator<ED2>>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.impl.GraphImpl
 
mapFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
Util JSON deserialization methods.
MapFunction<T,U> - Interface in org.apache.spark.api.java.function
Base interface for a map function used in Dataset's map function.
mapGroups(Function2<K, Iterator<V>, U>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
(Scala-specific) Applies the given function to each group of data.
mapGroups(MapGroupsFunction<K, V, U>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
(Java-specific) Applies the given function to each group of data.
MapGroupsFunction<K,V,R> - Interface in org.apache.spark.api.java.function
Base interface for a map function used in GroupedDataset's mapGroup function.
mapGroupsWithState(Function3<K, Iterator<V>, GroupState<S>, U>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
(Scala-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state.
mapGroupsWithState(GroupStateTimeout, Function3<K, Iterator<V>, GroupState<S>, U>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
(Scala-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state.
mapGroupsWithState(MapGroupsWithStateFunction<K, V, S, U>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
(Java-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state.
mapGroupsWithState(MapGroupsWithStateFunction<K, V, S, U>, Encoder<S>, Encoder<U>, GroupStateTimeout) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
(Java-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state.
MapGroupsWithStateFunction<K,V,S,R> - Interface in org.apache.spark.api.java.function
mapId() - Method in class org.apache.spark.FetchFailed
 
mapId() - Method in interface org.apache.spark.scheduler.MapStatus
The unique ID of this shuffle map task. If spark.shuffle.useOldFetchProtocol is enabled, the partitionId of the task is used; otherwise taskContext.taskAttemptId is used.
mapId() - Method in class org.apache.spark.storage.ShuffleBlockBatchId
 
mapId() - Method in class org.apache.spark.storage.ShuffleBlockId
 
mapId() - Method in class org.apache.spark.storage.ShuffleDataBlockId
 
mapId() - Method in class org.apache.spark.storage.ShuffleIndexBlockId
 
mapIndex() - Method in class org.apache.spark.FetchFailed
 
mapOutputTracker() - Method in class org.apache.spark.SparkEnv
 
MapOutputTrackerMessage - Interface in org.apache.spark
 
mapPartitions(FlatMapFunction<Iterator<T>, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return a new RDD by applying a function to each partition of this RDD.
mapPartitions(FlatMapFunction<Iterator<T>, U>, boolean) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return a new RDD by applying a function to each partition of this RDD.
mapPartitions(Function1<Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
Return a new RDD by applying a function to each partition of this RDD.
mapPartitions(Function1<Iterator<T>, Iterator<S>>, boolean, ClassTag<S>) - Method in class org.apache.spark.rdd.RDDBarrier
:: Experimental :: Returns a new RDD by applying a function to each partition of the wrapped RDD, where tasks are launched together in a barrier stage.
mapPartitions(Function1<Iterator<T>, Iterator<U>>, Encoder<U>) - Method in class org.apache.spark.sql.Dataset
(Scala-specific) Returns a new Dataset that contains the result of applying func to each partition.
mapPartitions(MapPartitionsFunction<T, U>, Encoder<U>) - Method in class org.apache.spark.sql.Dataset
(Java-specific) Returns a new Dataset that contains the result of applying f to each partition.
mapPartitions(FlatMapFunction<Iterator<T>, U>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD is generated by applying mapPartitions() to each RDD of this DStream.
mapPartitions(Function1<Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD is generated by applying mapPartitions() to each RDD of this DStream.
MapPartitionsFunction<T,U> - Interface in org.apache.spark.api.java.function
Base interface for function used in Dataset's mapPartitions.
mapPartitionsToDouble(DoubleFlatMapFunction<Iterator<T>>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return a new RDD by applying a function to each partition of this RDD.
mapPartitionsToDouble(DoubleFlatMapFunction<Iterator<T>>, boolean) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return a new RDD by applying a function to each partition of this RDD.
mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return a new RDD by applying a function to each partition of this RDD.
mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>, boolean) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return a new RDD by applying a function to each partition of this RDD.
mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD is generated by applying mapPartitions() to each RDD of this DStream.
mapPartitionsWithIndex(Function2<Integer, Iterator<T>, Iterator<R>>, boolean) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.
mapPartitionsWithIndex(Function2<Object, Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.
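mapPartitionsWithIndex above passes each partition's index along with its iterator, which is handy for per-partition diagnostics. A minimal sketch, assuming an existing SparkContext named sc (hypothetical setup):

```scala
// Hypothetical setup: `sc` is an existing SparkContext.
val rdd = sc.parallelize(1 to 6, numSlices = 3)

// Tag every element with the index of the partition it lives in.
val tagged = rdd.mapPartitionsWithIndex { (idx, it) =>
  it.map(x => (idx, x))
}
// e.g. (0,1), (0,2), (1,3), (1,4), (2,5), (2,6)
```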
mapPartitionsWithIndex(Function2<Object, Iterator<T>, Iterator<S>>, boolean, ClassTag<S>) - Method in class org.apache.spark.rdd.RDDBarrier
:: Experimental :: Returns a new RDD by applying a function to each partition of the wrapped RDD, while tracking the index of the original partition.
mapPartitionsWithInputSplit(Function2<InputSplit, Iterator<Tuple2<K, V>>, Iterator<R>>, boolean) - Method in class org.apache.spark.api.java.JavaHadoopRDD
Maps over a partition, providing the InputSplit that was used as the base of the partition.
mapPartitionsWithInputSplit(Function2<InputSplit, Iterator<Tuple2<K, V>>, Iterator<R>>, boolean) - Method in class org.apache.spark.api.java.JavaNewHadoopRDD
Maps over a partition, providing the InputSplit that was used as the base of the partition.
mapPartitionsWithInputSplit(Function2<InputSplit, Iterator<Tuple2<K, V>>, Iterator<U>>, boolean, ClassTag<U>) - Method in class org.apache.spark.rdd.HadoopRDD
Maps over a partition, providing the InputSplit that was used as the base of the partition.
mapPartitionsWithInputSplit(Function2<InputSplit, Iterator<Tuple2<K, V>>, Iterator<U>>, boolean, ClassTag<U>) - Method in class org.apache.spark.rdd.NewHadoopRDD
Maps over a partition, providing the InputSplit that was used as the base of the partition.
MappedPoolMemory - Class in org.apache.spark.metrics
 
MappedPoolMemory() - Constructor for class org.apache.spark.metrics.MappedPoolMemory
 
mapredInputFormat() - Method in class org.apache.spark.scheduler.InputFormatInfo
 
mapreduceInputFormat() - Method in class org.apache.spark.scheduler.InputFormatInfo
 
mapSideCombine() - Method in class org.apache.spark.ShuffleDependency
 
MapStatus - Interface in org.apache.spark.scheduler
Result returned by a ShuffleMapTask to a scheduler.
mapStatuses() - Method in class org.apache.spark.ShuffleStatus
MapStatus for each partition.
mapToDouble(DoubleFunction<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return a new RDD by applying a function to all elements of this RDD.
mapToJson(Map<String, String>) - Static method in class org.apache.spark.util.JsonProtocol
Util JSON serialization methods.
mapToPair(PairFunction<T, K2, V2>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return a new RDD by applying a function to all elements of this RDD.
mapToPair(PairFunction<T, K2, V2>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream by applying a function to all elements of this DStream.
mapTriplets(Function1<EdgeTriplet<VD, ED>, ED2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.Graph
Transforms each edge attribute using the map function, passing it the adjacent vertex attributes as well.
mapTriplets(Function1<EdgeTriplet<VD, ED>, ED2>, TripletFields, ClassTag<ED2>) - Method in class org.apache.spark.graphx.Graph
Transforms each edge attribute using the map function, passing it the adjacent vertex attributes as well.
mapTriplets(Function2<Object, Iterator<EdgeTriplet<VD, ED>>, Iterator<ED2>>, TripletFields, ClassTag<ED2>) - Method in class org.apache.spark.graphx.Graph
Transforms each edge attribute a partition at a time using the map function, passing it the adjacent vertex attributes as well.
mapTriplets(Function2<Object, Iterator<EdgeTriplet<VD, ED>>, Iterator<ED2>>, TripletFields, ClassTag<ED2>) - Method in class org.apache.spark.graphx.impl.GraphImpl
 
MapType - Class in org.apache.spark.sql.types
The data type for Maps.
MapType(DataType, DataType, boolean) - Constructor for class org.apache.spark.sql.types.MapType
 
MapType() - Constructor for class org.apache.spark.sql.types.MapType
No-arg constructor for Kryo.
mapValues(Function<V, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
Pass each value in the key-value pair RDD through a map function without changing the keys; this also retains the original RDD's partitioning.
mapValues(Function1<Edge<ED>, ED2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.EdgeRDD
Map the values in an edge partitioning preserving the structure but changing the values.
mapValues(Function1<Edge<ED>, ED2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
 
mapValues(Function1<VD, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
 
mapValues(Function2<Object, VD, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
 
mapValues(Function1<VD, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.VertexRDD
Maps each vertex attribute, preserving the index.
mapValues(Function2<Object, VD, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.VertexRDD
Maps each vertex attribute, additionally supplying the vertex ID.
mapValues(Function1<V, U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Pass each value in the key-value pair RDD through a map function without changing the keys; this also retains the original RDD's partitioning.
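As the PairRDDFunctions.mapValues entry above notes, only the values change while the keys and the partitioning are retained, so later key-based operations can avoid a shuffle. A minimal sketch, assuming an existing SparkContext named sc (hypothetical setup):

```scala
// Hypothetical setup: `sc` is an existing SparkContext.
val grouped = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 3))).groupByKey()

// Only the values change; the partitioner from groupByKey is retained,
// so a subsequent join or reduceByKey on the same keys needs no re-shuffle.
val sizes = grouped.mapValues(_.size)
```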
mapValues(Function1<V, W>, Encoder<W>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Returns a new KeyValueGroupedDataset where the given function func has been applied to the data.
mapValues(MapFunction<V, W>, Encoder<W>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Returns a new KeyValueGroupedDataset where the given function func has been applied to the data.
mapValues(Function<V, U>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying a map function to the value of each key-value pair in 'this' DStream without changing the key.
mapValues(Function1<V, U>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying a map function to the value of each key-value pair in 'this' DStream without changing the key.
mapVertices(Function2<Object, VD, VD2>, ClassTag<VD2>, Predef.$eq$colon$eq<VD, VD2>) - Method in class org.apache.spark.graphx.Graph
Transforms each vertex attribute in the graph using the map function.
mapVertices(Function2<Object, VD, VD2>, ClassTag<VD2>, Predef.$eq$colon$eq<VD, VD2>) - Method in class org.apache.spark.graphx.impl.GraphImpl
 
mapWithState(StateSpec<K, V, StateType, MappedType>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a JavaMapWithStateDStream by applying a function to every key-value element of this stream, while maintaining some state data for each unique key.
mapWithState(StateSpec<K, V, StateType, MappedType>, ClassTag<StateType>, ClassTag<MappedType>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a MapWithStateDStream by applying a function to every key-value element of this stream, while maintaining some state data for each unique key.
MapWithStateDStream<KeyType,ValueType,StateType,MappedType> - Class in org.apache.spark.streaming.dstream
DStream representing the stream of data generated by mapWithState operation on a pair DStream.
MapWithStateDStream(StreamingContext, ClassTag<MappedType>) - Constructor for class org.apache.spark.streaming.dstream.MapWithStateDStream
 
mark(int) - Method in class org.apache.spark.storage.BufferReleasingInputStream
 
markSupported() - Method in class org.apache.spark.storage.BufferReleasingInputStream
 
mask(Graph<VD2, ED2>, ClassTag<VD2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.Graph
Restricts the graph to only the vertices and edges that are also in other, but keeps the attributes from this graph.
mask(Graph<VD2, ED2>, ClassTag<VD2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.impl.GraphImpl
 
master() - Method in class org.apache.spark.api.java.JavaSparkContext
 
MASTER() - Static method in class org.apache.spark.metrics.MetricsSystemInstances
 
master() - Method in class org.apache.spark.SparkContext
 
master(String) - Method in class org.apache.spark.sql.SparkSession.Builder
Sets the Spark master URL to connect to, such as "local" to run locally, "local[4]" to run locally with 4 cores, or "spark://master:7077" to run on a Spark standalone cluster.
Matrices - Class in org.apache.spark.ml.linalg
Factory methods for Matrix.
Matrices() - Constructor for class org.apache.spark.ml.linalg.Matrices
 
Matrices - Class in org.apache.spark.mllib.linalg
Factory methods for Matrix.
Matrices() - Constructor for class org.apache.spark.mllib.linalg.Matrices
 
Matrix - Interface in org.apache.spark.ml.linalg
Trait for a local matrix.
Matrix - Interface in org.apache.spark.mllib.linalg
Trait for a local matrix.
MatrixEntry - Class in org.apache.spark.mllib.linalg.distributed
Represents an entry in a distributed matrix.
MatrixEntry(long, long, double) - Constructor for class org.apache.spark.mllib.linalg.distributed.MatrixEntry
 
MatrixFactorizationModel - Class in org.apache.spark.mllib.recommendation
Model representing the result of matrix factorization.
MatrixFactorizationModel(int, RDD<Tuple2<Object, double[]>>, RDD<Tuple2<Object, double[]>>) - Constructor for class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
 
MatrixFactorizationModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.recommendation
 
MatrixImplicits - Class in org.apache.spark.mllib.linalg
Implicit methods available in Scala for converting org.apache.spark.mllib.linalg.Matrix to org.apache.spark.ml.linalg.Matrix and vice versa.
MatrixImplicits() - Constructor for class org.apache.spark.mllib.linalg.MatrixImplicits
 
MatrixType() - Static method in class org.apache.spark.ml.linalg.SQLDataTypes
Data type for Matrix.
max() - Method in class org.apache.spark.api.java.JavaDoubleRDD
Returns the maximum element from this RDD as defined by the default comparator (natural order).
max(Comparator<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Returns the maximum element from this RDD as defined by the specified Comparator[T].
MAX() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
 
max() - Method in class org.apache.spark.ml.attribute.NumericAttribute
 
max() - Method in class org.apache.spark.ml.feature.MinMaxScaler
 
max() - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
 
max() - Method in interface org.apache.spark.ml.feature.MinMaxScalerParams
Upper bound after transformation, shared by all features. Default: 1.0.
max(Column, Column) - Static method in class org.apache.spark.ml.stat.Summarizer
 
max(Column) - Static method in class org.apache.spark.ml.stat.Summarizer
 
max() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
Maximum value of each dimension.
max() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
Maximum value of each column.
max(Ordering<T>) - Method in class org.apache.spark.rdd.RDD
Returns the max of this RDD as defined by the implicit Ordering[T].
max(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the maximum value of the expression in a group.
max(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the maximum value of the column in a group.
max(String...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
Compute the max value for each numeric column for each group.
max(Seq<String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
Compute the max value for each numeric column for each group.
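RelationalGroupedDataset.max above aggregates the named (or all) numeric columns per group. A minimal sketch, assuming an existing SparkSession named spark (hypothetical setup):

```scala
// Hypothetical setup: `spark` is an existing SparkSession.
import spark.implicits._

val df = Seq(("a", 1), ("a", 5), ("b", 3)).toDF("k", "v")

// Produces one row per group with a max(v) column:
// max(v) is 5 for group "a" and 3 for group "b".
df.groupBy("k").max("v").show()
```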
max(T, T) - 类 中的静态方法org.apache.spark.sql.types.ByteExactNumeric
 
max(T, T) - 类 中的静态方法org.apache.spark.sql.types.DecimalExactNumeric
 
max(double, double) - 类 中的静态方法org.apache.spark.sql.types.DoubleExactNumeric
 
max(float, float) - 类 中的静态方法org.apache.spark.sql.types.FloatExactNumeric
 
max(T, T) - 类 中的静态方法org.apache.spark.sql.types.IntegerExactNumeric
 
max(T, T) - 类 中的静态方法org.apache.spark.sql.types.LongExactNumeric
 
max(T, T) - 类 中的静态方法org.apache.spark.sql.types.ShortExactNumeric
 
max(Duration) - 类 中的方法org.apache.spark.streaming.Duration
 
max(Time) - 类 中的方法org.apache.spark.streaming.Time
 
max(long, long) - 类 中的静态方法org.apache.spark.streaming.util.RawTextHelper
 
max() - 类 中的方法org.apache.spark.util.StatCounter
 
MAX_DRIVER_LOG_AGE_S() - 类 中的静态方法org.apache.spark.internal.config.History
 
MAX_EXECUTOR_RETRIES() - 类 中的静态方法org.apache.spark.internal.config.Deploy
 
MAX_FEATURES_FOR_NORMAL_SOLVER() - 类 中的静态方法org.apache.spark.ml.regression.LinearRegression
When using LinearRegression.solver == "normal", the solver must limit the number of features to at most this number.
MAX_INT_DIGITS() - 类 中的静态方法org.apache.spark.sql.types.Decimal
Maximum number of decimal digits an Int can represent
MAX_LOCAL_DISK_USAGE() - 类 中的静态方法org.apache.spark.internal.config.History
 
MAX_LOG_AGE_S() - 类 中的静态方法org.apache.spark.internal.config.History
 
MAX_LOG_NUM() - 类 中的静态方法org.apache.spark.internal.config.History
 
MAX_LONG_DIGITS() - 类 中的静态方法org.apache.spark.sql.types.Decimal
Maximum number of decimal digits a Long can represent
MAX_PRECISION() - 类 中的静态方法org.apache.spark.sql.types.DecimalType
 
MAX_RETAINED_DEAD_EXECUTORS() - 类 中的静态方法org.apache.spark.internal.config.Status
 
MAX_RETAINED_JOBS() - 类 中的静态方法org.apache.spark.internal.config.Status
 
MAX_RETAINED_ROOT_NODES() - 类 中的静态方法org.apache.spark.internal.config.Status
 
MAX_RETAINED_STAGES() - 类 中的静态方法org.apache.spark.internal.config.Status
 
MAX_RETAINED_TASKS_PER_STAGE() - 类 中的静态方法org.apache.spark.internal.config.Status
 
MAX_SCALE() - 类 中的静态方法org.apache.spark.sql.types.DecimalType
 
maxAbs() - 类 中的方法org.apache.spark.ml.feature.MaxAbsScalerModel
 
MaxAbsScaler - org.apache.spark.ml.feature中的类
Rescale each feature individually to range [-1, 1] by dividing through the largest maximum absolute value in each feature.
MaxAbsScaler(String) - 类 的构造器org.apache.spark.ml.feature.MaxAbsScaler
 
MaxAbsScaler() - 类 的构造器org.apache.spark.ml.feature.MaxAbsScaler
 
MaxAbsScalerModel - org.apache.spark.ml.feature中的类
Model fitted by MaxAbsScaler.
MaxAbsScalerParams - org.apache.spark.ml.feature中的接口
maxBins() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel

maxBins() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier

maxBins() - Method in class org.apache.spark.ml.classification.GBTClassificationModel

maxBins() - Method in class org.apache.spark.ml.classification.GBTClassifier

maxBins() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel

maxBins() - Method in class org.apache.spark.ml.classification.RandomForestClassifier

maxBins() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel

maxBins() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor

maxBins() - Method in class org.apache.spark.ml.regression.GBTRegressionModel

maxBins() - Method in class org.apache.spark.ml.regression.GBTRegressor

maxBins() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel

maxBins() - Method in class org.apache.spark.ml.regression.RandomForestRegressor

maxBins() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
Maximum number of bins used for discretizing continuous features and for choosing how to split on features at each node.
maxBins() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

maxBufferSizeMb() - Method in class org.apache.spark.serializer.KryoSerializer

maxCategories() - Method in class org.apache.spark.ml.feature.VectorIndexer

maxCategories() - Method in class org.apache.spark.ml.feature.VectorIndexerModel

maxCategories() - Method in interface org.apache.spark.ml.feature.VectorIndexerParams
Threshold for the number of values a categorical feature can take.
maxCores() - Method in class org.apache.spark.status.api.v1.ApplicationInfo

maxDepth() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel

maxDepth() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier

maxDepth() - Method in class org.apache.spark.ml.classification.GBTClassificationModel

maxDepth() - Method in class org.apache.spark.ml.classification.GBTClassifier

maxDepth() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel

maxDepth() - Method in class org.apache.spark.ml.classification.RandomForestClassifier

maxDepth() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel

maxDepth() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor

maxDepth() - Method in class org.apache.spark.ml.regression.GBTRegressionModel

maxDepth() - Method in class org.apache.spark.ml.regression.GBTRegressor

maxDepth() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel

maxDepth() - Method in class org.apache.spark.ml.regression.RandomForestRegressor

maxDepth() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
Maximum depth of the tree (nonnegative).
maxDepth() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

maxDF() - Method in class org.apache.spark.ml.feature.CountVectorizer

maxDF() - Method in class org.apache.spark.ml.feature.CountVectorizerModel

maxDF() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
Specifies the maximum number of different documents a term could appear in to be included in the vocabulary.
maxId() - Static method in class org.apache.spark.mllib.tree.configuration.Algo

maxId() - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy

maxId() - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType

maxId() - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy

maxId() - Static method in class org.apache.spark.rdd.CheckpointState

maxId() - Static method in class org.apache.spark.rdd.DeterministicLevel

maxId() - Static method in class org.apache.spark.scheduler.SchedulingMode

maxId() - Static method in class org.apache.spark.scheduler.TaskLocality

maxId() - Static method in class org.apache.spark.streaming.scheduler.ReceiverState

maxId() - Static method in class org.apache.spark.TaskState

maxIter() - Method in class org.apache.spark.ml.classification.GBTClassificationModel

maxIter() - Method in class org.apache.spark.ml.classification.GBTClassifier

maxIter() - Method in class org.apache.spark.ml.classification.LinearSVC

maxIter() - Method in class org.apache.spark.ml.classification.LinearSVCModel

maxIter() - Method in class org.apache.spark.ml.classification.LogisticRegression

maxIter() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel

maxIter() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier

maxIter() - Method in class org.apache.spark.ml.clustering.BisectingKMeans

maxIter() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel

maxIter() - Method in class org.apache.spark.ml.clustering.GaussianMixture

maxIter() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel

maxIter() - Method in class org.apache.spark.ml.clustering.KMeans

maxIter() - Method in class org.apache.spark.ml.clustering.KMeansModel

maxIter() - Method in class org.apache.spark.ml.clustering.LDA

maxIter() - Method in class org.apache.spark.ml.clustering.LDAModel

maxIter() - Method in class org.apache.spark.ml.clustering.PowerIterationClustering

maxIter() - Method in class org.apache.spark.ml.feature.Word2Vec

maxIter() - Method in class org.apache.spark.ml.feature.Word2VecModel

maxIter() - Method in interface org.apache.spark.ml.param.shared.HasMaxIter
Param for maximum number of iterations (>= 0).
maxIter() - Method in class org.apache.spark.ml.recommendation.ALS

maxIter() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression

maxIter() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel

maxIter() - Method in class org.apache.spark.ml.regression.GBTRegressionModel

maxIter() - Method in class org.apache.spark.ml.regression.GBTRegressor

maxIter() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression

maxIter() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel

maxIter() - Method in class org.apache.spark.ml.regression.LinearRegression

maxIter() - Method in class org.apache.spark.ml.regression.LinearRegressionModel

maxIters() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf

maxLocalProjDBSize() - Method in class org.apache.spark.ml.fpm.PrefixSpan
Param for the maximum number of items (including delimiters used in the internal storage format) allowed in a projected database before local processing (default: 32000000).
maxMem() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded

maxMemory() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

maxMemory() - Method in class org.apache.spark.status.LiveExecutor

maxMemoryInMB() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel

maxMemoryInMB() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier

maxMemoryInMB() - Method in class org.apache.spark.ml.classification.GBTClassificationModel

maxMemoryInMB() - Method in class org.apache.spark.ml.classification.GBTClassifier

maxMemoryInMB() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel

maxMemoryInMB() - Method in class org.apache.spark.ml.classification.RandomForestClassifier

maxMemoryInMB() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel

maxMemoryInMB() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor

maxMemoryInMB() - Method in class org.apache.spark.ml.regression.GBTRegressionModel

maxMemoryInMB() - Method in class org.apache.spark.ml.regression.GBTRegressor

maxMemoryInMB() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel

maxMemoryInMB() - Method in class org.apache.spark.ml.regression.RandomForestRegressor

maxMemoryInMB() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
Maximum memory in MB allocated to histogram aggregation.
maxMemoryInMB() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

maxMessageSizeBytes(SparkConf) - Static method in class org.apache.spark.util.RpcUtils
Returns the configured max message size for messages in bytes.
maxNodesInLevel(int) - Static method in class org.apache.spark.mllib.tree.model.Node
Return the maximum number of nodes which can be in the given level of the tree.
maxNumConcurrentTasks() - Method in interface org.apache.spark.scheduler.SchedulerBackend
Get the maximum number of tasks that can currently be launched concurrently.
maxOffHeapMem() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded

maxOffHeapMemSize() - Method in class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager

maxOnHeapMem() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded

maxOnHeapMemSize() - Method in class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager

maxPatternLength() - Method in class org.apache.spark.ml.fpm.PrefixSpan
Param for the maximal pattern length (default: 10).
maxPrecisionForBytes(int) - Static method in class org.apache.spark.sql.types.Decimal

maxReplicas() - Method in class org.apache.spark.storage.BlockManagerMessages.ReplicateBlock

maxSentenceLength() - Method in class org.apache.spark.ml.feature.Word2Vec

maxSentenceLength() - Method in interface org.apache.spark.ml.feature.Word2VecBase
Sets the maximum length (in words) of each sentence in the input data.
maxSentenceLength() - Method in class org.apache.spark.ml.feature.Word2VecModel

maxSplitFeatureIndex() - Method in interface org.apache.spark.ml.tree.DecisionTreeModel
Trace down the tree, and return the largest feature index used in any split.
maxTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

maxTasks() - Method in class org.apache.spark.status.LiveExecutor

maxVal() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf

maybeUpdateOutputMetrics(OutputMetrics, Function0<Object>, long) - Static method in class org.apache.spark.internal.io.SparkHadoopWriterUtils

md5(Column) - Static method in class org.apache.spark.sql.functions
Calculates the MD5 digest of a binary column and returns the value as a 32 character hex string.
mean() - Method in class org.apache.spark.api.java.JavaDoubleRDD
Compute the mean of this RDD's elements.
mean() - Method in class org.apache.spark.ml.feature.StandardScalerModel

mean() - Method in class org.apache.spark.ml.stat.distribution.MultivariateGaussian

mean(Column, Column) - Static method in class org.apache.spark.ml.stat.Summarizer

mean(Column) - Static method in class org.apache.spark.ml.stat.Summarizer

mean() - Method in class org.apache.spark.mllib.feature.StandardScalerModel

mean() - Method in class org.apache.spark.mllib.random.ExponentialGenerator

mean() - Method in class org.apache.spark.mllib.random.LogNormalGenerator

mean() - Method in class org.apache.spark.mllib.random.PoissonGenerator

mean() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
Sample mean of each dimension.
mean() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
Sample mean vector.
mean() - Method in class org.apache.spark.partial.BoundedDouble

mean() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
Compute the mean of this RDD's elements.
mean(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the average of the values in a group.
mean(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the average of the values in a group.
mean(String...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
Compute the average value for each numeric column for each group.
mean(Seq<String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
Compute the average value for each numeric column for each group.
mean() - Method in class org.apache.spark.util.StatCounter

meanAbsoluteError() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
Returns the mean absolute error, which is a risk function corresponding to the expected value of the absolute error loss or l1-norm loss.
meanAbsoluteError() - Method in class org.apache.spark.mllib.evaluation.RegressionMetrics
Returns the mean absolute error, which is a risk function corresponding to the expected value of the absolute error loss or l1-norm loss.
meanApprox(long, Double) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return the approximate mean of the elements in this RDD.
meanApprox(long) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Approximate operation to return the mean within a timeout.
meanApprox(long, double) - Method in class org.apache.spark.rdd.DoubleRDDFunctions
Approximate operation to return the mean within a timeout.
meanAveragePrecision() - Method in class org.apache.spark.mllib.evaluation.RankingMetrics

meanAveragePrecisionAt(int) - Method in class org.apache.spark.mllib.evaluation.RankingMetrics
Returns the mean average precision (MAP) at ranking position k of all the queries.
means() - Method in class org.apache.spark.ml.clustering.ExpectationAggregator

means() - Method in class org.apache.spark.mllib.clustering.ExpectationSum

meanSquaredError() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
Returns the mean squared error, which is a risk function corresponding to the expected value of the squared error loss or quadratic loss.
meanSquaredError() - Method in class org.apache.spark.mllib.evaluation.RegressionMetrics
Returns the mean squared error, which is a risk function corresponding to the expected value of the squared error loss or quadratic loss.
median() - Method in class org.apache.spark.ml.feature.RobustScalerModel

megabytesToString(long) - Static method in class org.apache.spark.util.Utils
Convert a quantity in megabytes to a human-readable string such as "4.0 MiB".
MEM_SPILL() - Static method in class org.apache.spark.status.TaskIndexNames

MEMORY_AND_DISK - Static variable in class org.apache.spark.api.java.StorageLevels

MEMORY_AND_DISK() - Static method in class org.apache.spark.storage.StorageLevel

MEMORY_AND_DISK_2 - Static variable in class org.apache.spark.api.java.StorageLevels

MEMORY_AND_DISK_2() - Static method in class org.apache.spark.storage.StorageLevel

MEMORY_AND_DISK_SER - Static variable in class org.apache.spark.api.java.StorageLevels

MEMORY_AND_DISK_SER() - Static method in class org.apache.spark.storage.StorageLevel

MEMORY_AND_DISK_SER_2 - Static variable in class org.apache.spark.api.java.StorageLevels

MEMORY_AND_DISK_SER_2() - Static method in class org.apache.spark.storage.StorageLevel

MEMORY_BYTES_SPILLED() - Static method in class org.apache.spark.InternalAccumulator

MEMORY_ONLY - Static variable in class org.apache.spark.api.java.StorageLevels

MEMORY_ONLY() - Static method in class org.apache.spark.storage.StorageLevel

MEMORY_ONLY_2 - Static variable in class org.apache.spark.api.java.StorageLevels

MEMORY_ONLY_2() - Static method in class org.apache.spark.storage.StorageLevel

MEMORY_ONLY_SER - Static variable in class org.apache.spark.api.java.StorageLevels

MEMORY_ONLY_SER() - Static method in class org.apache.spark.storage.StorageLevel

MEMORY_ONLY_SER_2 - Static variable in class org.apache.spark.api.java.StorageLevels

MEMORY_ONLY_SER_2() - Static method in class org.apache.spark.storage.StorageLevel

memoryBytesSpilled() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary

memoryBytesSpilled() - Method in class org.apache.spark.status.api.v1.StageData

memoryBytesSpilled() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions

memoryBytesSpilled() - Method in class org.apache.spark.status.api.v1.TaskMetrics

memoryCost(int, int) - Static method in class org.apache.spark.mllib.feature.PCAUtil

MemoryEntry<T> - Interface in org.apache.spark.storage.memory

MemoryEntryBuilder<T> - Interface in org.apache.spark.storage.memory

memoryManager() - Method in class org.apache.spark.SparkEnv

memoryMetrics() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

MemoryMetrics - Class in org.apache.spark.status.api.v1

memoryMode() - Method in class org.apache.spark.storage.memory.DeserializedMemoryEntry

memoryMode() - Method in interface org.apache.spark.storage.memory.MemoryEntry

memoryMode() - Method in class org.apache.spark.storage.memory.SerializedMemoryEntry

MemoryParam - Class in org.apache.spark.util
An extractor object for parsing JVM memory strings, such as "10g", into an Int representing the number of megabytes.
MemoryParam() - Constructor for class org.apache.spark.util.MemoryParam

memoryPerExecutorMB() - Method in class org.apache.spark.status.api.v1.ApplicationInfo

memoryRemaining() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution

memoryStringToMb(String) - Static method in class org.apache.spark.util.Utils
Convert a Java memory parameter passed to -Xmx (such as 300m or 1g) to a number of mebibytes.
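Parsing such JVM-style memory strings can be sketched as follows. This is an illustrative reimplementation, not Spark's code; in particular, how the real Utils.memoryStringToMb rounds and treats a bare number is an assumption here:

```python
# Parse a JVM memory string such as "300m" or "1g" into mebibytes.
# Assumed suffix semantics: k/m/g/t are binary (1024-based) multipliers.
def memory_string_to_mb(s):
    s = s.strip().lower()
    multipliers = {"k": 1 / 1024, "m": 1, "g": 1024, "t": 1024 * 1024}
    if s[-1] in multipliers:
        return int(int(s[:-1]) * multipliers[s[-1]])
    return int(s) // (1024 * 1024)  # bare number: treated as bytes here
```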
memoryUsed() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

memoryUsed() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution

memoryUsed() - Method in class org.apache.spark.status.api.v1.RDDPartitionInfo

memoryUsed() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo

memoryUsed() - Method in class org.apache.spark.status.LiveExecutor

memoryUsed() - Method in class org.apache.spark.status.LiveRDD

memoryUsed() - Method in class org.apache.spark.status.LiveRDDDistribution

memoryUsed() - Method in class org.apache.spark.status.LiveRDDPartition

memoryUsedBytes() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress

memSize() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo

memSize() - Method in class org.apache.spark.storage.BlockStatus

memSize() - Method in class org.apache.spark.storage.BlockUpdatedInfo

memSize() - Method in class org.apache.spark.storage.RDDInfo

merge(ExpectationAggregator) - Method in class org.apache.spark.ml.clustering.ExpectationAggregator
Merge another ExpectationAggregator, update the weights, means and covariances for each distribution, and update the log likelihood.
merge(OpenHashMap<String, Object>[], OpenHashMap<String, Object>[]) - Method in class org.apache.spark.ml.feature.StringIndexerAggregator

merge(Agg) - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
Merge two aggregators.
merge(AFTAggregator) - Method in class org.apache.spark.ml.regression.AFTAggregator
Merge another AFTAggregator, and update the loss and gradient of the objective function.
merge(IDF.DocumentFrequencyAggregator) - Method in class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
Merges another document frequency aggregator into this one.
merge(MultivariateOnlineSummarizer) - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
Merge another MultivariateOnlineSummarizer, and update the statistical summary.
merge(int, U) - Method in interface org.apache.spark.partial.ApproximateEvaluator

merge(BUF, BUF) - Method in class org.apache.spark.sql.expressions.Aggregator
Merge two intermediate values.
merge(MutableAggregationBuffer, Row) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
Merges two aggregation buffers and stores the updated buffer values back to buffer1.
merge(AccumulatorV2<IN, OUT>) - Method in class org.apache.spark.util.AccumulatorV2
Merges another same-type accumulator into this one and updates its state, i.e. this should be merge-in-place.
merge(AccumulatorV2<T, List<T>>) - Method in class org.apache.spark.util.CollectionAccumulator

merge(AccumulatorV2<Double, Double>) - Method in class org.apache.spark.util.DoubleAccumulator

merge(AccumulatorV2<Long, Long>) - Method in class org.apache.spark.util.LongAccumulator

merge(double) - Method in class org.apache.spark.util.StatCounter
Add a value into this StatCounter, updating the internal statistics.
merge(TraversableOnce<Object>) - Method in class org.apache.spark.util.StatCounter
Add multiple values into this StatCounter, updating the internal statistics.
merge(StatCounter) - Method in class org.apache.spark.util.StatCounter
Merge another StatCounter into this one, adding up the internal statistics.
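Merging two sets of running statistics without revisiting the data, as StatCounter.merge(StatCounter) does, follows the standard parallel-variance combination. A minimal sketch (illustrative triples, not the actual StatCounter fields):

```python
# Each argument is a (count, mean, m2) triple, where m2 is the sum of
# squared deviations from the mean. Combining two such triples gives the
# exact statistics of the concatenated data.
def merge_stats(a, b):
    n1, mu1, m2_1 = a
    n2, mu2, m2_2 = b
    n = n1 + n2
    delta = mu2 - mu1
    mean = mu1 + delta * n2 / n
    m2 = m2_1 + m2_2 + delta * delta * n1 * n2 / n
    return n, mean, m2
```

For example, merging the stats of [1, 2] with the stats of [3, 4] reproduces the stats of [1, 2, 3, 4].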
mergeCombiners() - Method in class org.apache.spark.Aggregator

mergeInPlace(BloomFilter) - Method in class org.apache.spark.util.sketch.BloomFilter
Combines this bloom filter with another bloom filter by performing a bitwise OR of the underlying data.
mergeInPlace(CountMinSketch) - Method in class org.apache.spark.util.sketch.CountMinSketch
Merges another CountMinSketch with this one in place.
mergeOffsets(PartitionOffset[]) - Method in interface org.apache.spark.sql.connector.read.streaming.ContinuousStream
Merge partitioned offsets coming from ContinuousPartitionReader instances for each partition to a single global offset.
mergeValue() - Method in class org.apache.spark.Aggregator

MESOS_CLUSTER() - Static method in class org.apache.spark.metrics.MetricsSystemInstances

message() - Method in class org.apache.spark.FetchFailed

message() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutorFailed

message() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker

message() - Static method in class org.apache.spark.scheduler.ExecutorKilled

message() - Static method in class org.apache.spark.scheduler.LossReasonPending

message() - Method in exception org.apache.spark.sql.AnalysisException

message() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException

message() - Method in class org.apache.spark.sql.streaming.StreamingQueryStatus

MessageLoop - Class in org.apache.spark.rpc.netty
A message loop used by Dispatcher to deliver messages to endpoints.
MessageLoop(Dispatcher) - Constructor for class org.apache.spark.rpc.netty.MessageLoop

MetaAlgorithmReadWrite - Class in org.apache.spark.ml.util
Default Meta-Algorithm read and write implementation.
MetaAlgorithmReadWrite() - Constructor for class org.apache.spark.ml.util.MetaAlgorithmReadWrite

Metadata - Class in org.apache.spark.sql.types
Metadata is a wrapper over Map[String, Any] that limits the value type to simple ones: Boolean, Long, Double, String, Metadata, Array[Boolean], Array[Long], Array[Double], Array[String], and Array[Metadata].
metadata() - Method in class org.apache.spark.sql.types.StructField

metadata() - Method in class org.apache.spark.streaming.scheduler.StreamInputInfo

METADATA_KEY_DESCRIPTION() - Static method in class org.apache.spark.streaming.scheduler.StreamInputInfo
The key for description in StreamInputInfo.metadata.
MetadataBuilder - Class in org.apache.spark.sql.types
Builder for Metadata.
MetadataBuilder() - Constructor for class org.apache.spark.sql.types.MetadataBuilder

metadataDescription() - Method in class org.apache.spark.streaming.scheduler.StreamInputInfo

MetadataUtils - Class in org.apache.spark.ml.util
Helper utilities for algorithms using ML metadata.
MetadataUtils() - Constructor for class org.apache.spark.ml.util.MetadataUtils

Method(String, Function2<Object, Object, Object>) - Constructor for class org.apache.spark.mllib.stat.test.ChiSqTest.Method

method() - Method in class org.apache.spark.mllib.stat.test.ChiSqTestResult

Method$() - Constructor for class org.apache.spark.mllib.stat.test.ChiSqTest.Method$

MethodIdentifier<T> - Class in org.apache.spark.util
Helper class to identify a method.
MethodIdentifier(Class<T>, String, String) - Constructor for class org.apache.spark.util.MethodIdentifier

methodName() - Method in interface org.apache.spark.mllib.stat.test.StreamingTestMethod

methodName() - Static method in class org.apache.spark.mllib.stat.test.StudentTTest

methodName() - Static method in class org.apache.spark.mllib.stat.test.WelchTTest
METRIC_COMPILATION_TIME() - Static method in class org.apache.spark.metrics.source.CodegenMetrics
Histogram of the time it took to compile source code text (in milliseconds).
METRIC_FILE_CACHE_HITS() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
Tracks the total number of files served from the file status cache instead of discovered.
METRIC_FILES_DISCOVERED() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
Tracks the total number of files discovered off of the filesystem by InMemoryFileIndex.
METRIC_GENERATED_CLASS_BYTECODE_SIZE() - Static method in class org.apache.spark.metrics.source.CodegenMetrics
Histogram of the bytecode size of each class generated by CodeGenerator.
METRIC_GENERATED_METHOD_BYTECODE_SIZE() - Static method in class org.apache.spark.metrics.source.CodegenMetrics
Histogram of the bytecode size of each method in classes generated by CodeGenerator.
METRIC_HIVE_CLIENT_CALLS() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
Tracks the total number of Hive client calls (e.g. to lookup a table).
METRIC_PARALLEL_LISTING_JOB_COUNT() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
Tracks the total number of Spark jobs launched for parallel file listing.
METRIC_PARTITIONS_FETCHED() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
Tracks the total number of partition metadata entries fetched via the client api.
METRIC_SOURCE_CODE_SIZE() - Static method in class org.apache.spark.metrics.source.CodegenMetrics
Histogram of the length of source code text compiled by CodeGenerator (in characters).
metricLabel() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

metricLabel() - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator

metricName() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
Param for metric name in evaluation (supports "areaUnderROC" (default), "areaUnderPR").
metricName() - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
Param for metric name in evaluation (supports "silhouette" (default)).
metricName() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
Param for metric name in evaluation (supports "f1" (default), "accuracy", "weightedPrecision", "weightedRecall", "weightedTruePositiveRate", "weightedFalsePositiveRate", "weightedFMeasure", "truePositiveRateByLabel", "falsePositiveRateByLabel", "precisionByLabel", "recallByLabel", "fMeasureByLabel", "logLoss").
metricName() - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
Param for metric name in evaluation (supports "f1Measure" (default), "subsetAccuracy", "accuracy", "hammingLoss", "precision", "recall", "precisionByLabel", "recallByLabel", "f1MeasureByLabel", "microPrecision", "microRecall", "microF1Measure").
metricName() - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
Param for metric name in evaluation (supports "meanAveragePrecision" (default), "meanAveragePrecisionAtK", "precisionAtK", "ndcgAtK", "recallAtK").
metricName() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
Param for metric name in evaluation.
metricPeaks() - Method in class org.apache.spark.TaskKilled

metricRegistry - Variable in class org.apache.spark.ExecutorPluginContext

metricRegistry() - Static method in class org.apache.spark.metrics.source.CodegenMetrics

metricRegistry() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics

metricRegistry() - Method in interface org.apache.spark.metrics.source.Source

metrics(String...) - Static method in class org.apache.spark.ml.stat.Summarizer
Given a list of metrics, provides a builder that in turn computes those metrics from a column.
metrics(Seq<String>) - Static method in class org.apache.spark.ml.stat.Summarizer
Given a list of metrics, provides a builder that in turn computes those metrics from a column.
metrics() - Method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand

metrics() - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveDirCommand

metrics() - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable

metrics() - Method in class org.apache.spark.sql.hive.execution.OptimizedCreateHiveTableAsSelectCommand

metrics() - Method in class org.apache.spark.status.LiveExecutorStageSummary

metrics() - Method in class org.apache.spark.status.LiveStage

METRICS_PREFIX() - Static method in class org.apache.spark.InternalAccumulator

metricsSystem() - Method in class org.apache.spark.SparkEnv

MetricsSystemInstances - Class in org.apache.spark.metrics

MetricsSystemInstances() - Constructor for class org.apache.spark.metrics.MetricsSystemInstances

MFDataGenerator - Class in org.apache.spark.mllib.util
:: DeveloperApi :: Generate RDD(s) containing data for Matrix Factorization.
MFDataGenerator() - Constructor for class org.apache.spark.mllib.util.MFDataGenerator

MicroBatchStream - Interface in org.apache.spark.sql.connector.read.streaming
A SparkDataStream for streaming queries with micro-batch mode.
microF1Measure() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics

microPrecision() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics

microRecall() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics

mightContain(Object) - Method in class org.apache.spark.util.sketch.BloomFilter
Returns true if the element might have been put in this Bloom filter, false if this is definitely not the case.
mightContainBinary(byte[]) - Method in class org.apache.spark.util.sketch.BloomFilter
A specialized variant of BloomFilter.mightContain(Object) that only tests byte array items.
mightContainLong(long) - Method in class org.apache.spark.util.sketch.BloomFilter
A specialized variant of BloomFilter.mightContain(Object) that only tests long items.
mightContainString(String) - Method in class org.apache.spark.util.sketch.BloomFilter
A specialized variant of BloomFilter.mightContain(Object) that only tests String items.
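The mightContain contract described above (possible false positives, no false negatives) can be illustrated with a toy Bloom filter in Python. This is a minimal sketch of the data structure's semantics, not Spark's implementation; the class and its sizing are made up here:

```python
# Toy Bloom filter: k hash positions per item, one shared bit array.
# might_contain may return True for an item never inserted (false positive),
# but never returns False for an item that was inserted.
class ToyBloomFilter:
    def __init__(self, num_bits=1024, num_hashes=3):
        self.bits = [False] * num_bits
        self.num_hashes = num_hashes

    def _positions(self, item):
        # Derive k positions by hashing the item with k different seeds.
        return [hash((seed, item)) % len(self.bits) for seed in range(self.num_hashes)]

    def put(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        return all(self.bits[pos] for pos in self._positions(item))
```

Note how mergeInPlace's bitwise-OR semantics follow naturally: OR-ing two such bit arrays yields a filter that answers might_contain as if every item from both filters had been inserted.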
milliseconds() - Method in class org.apache.spark.streaming.Duration

milliseconds(long) - Static method in class org.apache.spark.streaming.Durations

Milliseconds - Class in org.apache.spark.streaming
Helper object that creates instance of Duration representing a given number of milliseconds.
Milliseconds() - Constructor for class org.apache.spark.streaming.Milliseconds

milliseconds() - Method in class org.apache.spark.streaming.Time

millisToString(long) - Static method in class org.apache.spark.scheduler.StatsReportListener
Reformat a time interval in milliseconds to a prettier format for output.
min() - Method in class org.apache.spark.api.java.JavaDoubleRDD
Returns the minimum element from this RDD as defined by the default comparator (natural order).
min(Comparator<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Returns the minimum element from this RDD as defined by the specified Comparator[T].
MIN() - Static method in class org.apache.spark.ml.attribute.AttributeKeys

min() - Method in class org.apache.spark.ml.attribute.NumericAttribute

min() - Method in class org.apache.spark.ml.feature.MinMaxScaler

min() - Method in class org.apache.spark.ml.feature.MinMaxScalerModel

min() - Method in interface org.apache.spark.ml.feature.MinMaxScalerParams
Lower bound after transformation, shared by all features. Default: 0.0.
min(Column, Column) - Static method in class org.apache.spark.ml.stat.Summarizer

min(Column) - Static method in class org.apache.spark.ml.stat.Summarizer

min() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
Minimum value of each dimension.
min() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
Minimum value of each column.
min(Ordering<T>) - Method in class org.apache.spark.rdd.RDD
Returns the min of this RDD as defined by the implicit Ordering[T].
min(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the minimum value of the expression in a group.
min(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the minimum value of the column in a group.
min(String...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
Compute the min value for each numeric column for each group.
min(Seq<String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
Compute the min value for each numeric column for each group.
min(T, T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric

min(T, T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric

min(double, double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric

min(float, float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric

min(T, T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric

min(T, T) - Static method in class org.apache.spark.sql.types.LongExactNumeric

min(T, T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric

min(Duration) - Method in class org.apache.spark.streaming.Duration

min(Time) - Method in class org.apache.spark.streaming.Time

min() - Method in class org.apache.spark.util.StatCounter

minBytesForPrecision() - Static method in class org.apache.spark.sql.types.Decimal
minConfidence() - Method in class org.apache.spark.ml.fpm.FPGrowth

minConfidence() - Method in class org.apache.spark.ml.fpm.FPGrowthModel

minConfidence() - Method in interface org.apache.spark.ml.fpm.FPGrowthParams
Minimal confidence for generating association rules. minConfidence does not affect the mining of frequent itemsets, but does affect the generation of association rules.
minCount() - Method in class org.apache.spark.ml.feature.Word2Vec

minCount() - Method in interface org.apache.spark.ml.feature.Word2VecBase
The minimum number of times a token must appear to be included in the word2vec model's vocabulary.
minCount() - Method in class org.apache.spark.ml.feature.Word2VecModel

minDF() - Method in class org.apache.spark.ml.feature.CountVectorizer

minDF() - Method in class org.apache.spark.ml.feature.CountVectorizerModel

minDF() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
Specifies the minimum number of different documents a term must appear in to be included in the vocabulary.
minDivisibleClusterSize() - Method in class org.apache.spark.ml.clustering.BisectingKMeans

minDivisibleClusterSize() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel

minDivisibleClusterSize() - Method in interface org.apache.spark.ml.clustering.BisectingKMeansParams
The minimum number of points (if greater than or equal to 1.0) or the minimum proportion of points (if less than 1.0) of a divisible cluster (default: 1.0).
minDocFreq() - Method in class org.apache.spark.ml.feature.IDF

minDocFreq() - Method in interface org.apache.spark.ml.feature.IDFBase
The minimum number of documents in which a term should appear.
minDocFreq() - Method in class org.apache.spark.ml.feature.IDFModel

minDocFreq() - Method in class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator

minDocFreq() - Method in class org.apache.spark.mllib.feature.IDF

MinHashLSH - Class in org.apache.spark.ml.feature
LSH class for Jaccard distance.
MinHashLSH(String) - Constructor for class org.apache.spark.ml.feature.MinHashLSH

MinHashLSH() - Constructor for class org.apache.spark.ml.feature.MinHashLSH

MinHashLSHModel - Class in org.apache.spark.ml.feature
Model produced by MinHashLSH, where multiple hash functions are stored.
MINIMUM_ADJUSTED_SCALE() - Static method in class org.apache.spark.sql.types.DecimalType

minInfoGain() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel

minInfoGain() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier

minInfoGain() - Method in class org.apache.spark.ml.classification.GBTClassificationModel

minInfoGain() - Method in class org.apache.spark.ml.classification.GBTClassifier

minInfoGain() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel

minInfoGain() - Method in class org.apache.spark.ml.classification.RandomForestClassifier

minInfoGain() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel

minInfoGain() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor

minInfoGain() - Method in class org.apache.spark.ml.regression.GBTRegressionModel

minInfoGain() - Method in class org.apache.spark.ml.regression.GBTRegressor

minInfoGain() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel

minInfoGain() - Method in class org.apache.spark.ml.regression.RandomForestRegressor

minInfoGain() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
Minimum information gain for a split to be considered at a tree node.
minInfoGain() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
minInstancesPerNode() - 类 中的方法org.apache.spark.ml.classification.DecisionTreeClassificationModel
 
minInstancesPerNode() - 类 中的方法org.apache.spark.ml.classification.DecisionTreeClassifier
 
minInstancesPerNode() - 类 中的方法org.apache.spark.ml.classification.GBTClassificationModel
 
minInstancesPerNode() - 类 中的方法org.apache.spark.ml.classification.GBTClassifier
 
minInstancesPerNode() - 类 中的方法org.apache.spark.ml.classification.RandomForestClassificationModel
 
minInstancesPerNode() - 类 中的方法org.apache.spark.ml.classification.RandomForestClassifier
 
minInstancesPerNode() - 类 中的方法org.apache.spark.ml.regression.DecisionTreeRegressionModel
 
minInstancesPerNode() - 类 中的方法org.apache.spark.ml.regression.DecisionTreeRegressor
 
minInstancesPerNode() - 类 中的方法org.apache.spark.ml.regression.GBTRegressionModel
 
minInstancesPerNode() - 类 中的方法org.apache.spark.ml.regression.GBTRegressor
 
minInstancesPerNode() - 类 中的方法org.apache.spark.ml.regression.RandomForestRegressionModel
 
minInstancesPerNode() - 类 中的方法org.apache.spark.ml.regression.RandomForestRegressor
 
minInstancesPerNode() - 接口 中的方法org.apache.spark.ml.tree.DecisionTreeParams
Minimum number of instances each child must have after split.
minInstancesPerNode() - 类 中的方法org.apache.spark.mllib.tree.configuration.Strategy
 
MinMax() - 类 中的静态方法org.apache.spark.mllib.tree.configuration.QuantileStrategy
 
MinMaxScaler - org.apache.spark.ml.feature中的类
Rescale each feature individually to a common range [min, max] linearly using column summary statistics, which is also known as min-max normalization or Rescaling.
MinMaxScaler(String) - 类 的构造器org.apache.spark.ml.feature.MinMaxScaler
 
MinMaxScaler() - 类 的构造器org.apache.spark.ml.feature.MinMaxScaler
 
MinMaxScalerModel - org.apache.spark.ml.feature中的类
Model fitted by MinMaxScaler.
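MinMaxScaler's linear rescaling maps a column value x to (x - E_min) / (E_max - E_min) * (max - min) + min, where E_min and E_max are the observed column minimum and maximum. A minimal plain-Python sketch of that formula (Spark-independent; the function name is illustrative, and constant columns are mapped to the midpoint of the target range, as Spark does):

```python
def min_max_scale(values, new_min=0.0, new_max=1.0):
    """Rescale a list of numbers linearly to [new_min, new_max]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # A constant column carries no spread; map it to the range midpoint.
        return [(new_min + new_max) / 2.0 for _ in values]
    scale = (new_max - new_min) / (hi - lo)
    return [(v - lo) * scale + new_min for v in values]

print(min_max_scale([1.0, 2.0, 3.0]))            # [0.0, 0.5, 1.0]
print(min_max_scale([0.0, 10.0], -1.0, 1.0))     # [-1.0, 1.0]
```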
MinMaxScalerParams - Interface in org.apache.spark.ml.feature
minorVersion(String) - Static method in class org.apache.spark.util.VersionUtils
Given a Spark version string, return the minor version number.
minSamplingRate() - Static method in class org.apache.spark.util.random.BinomialBounds
 
minShare() - Method in interface org.apache.spark.scheduler.Schedulable
 
minSupport() - Method in class org.apache.spark.ml.fpm.FPGrowth
 
minSupport() - Method in class org.apache.spark.ml.fpm.FPGrowthModel
 
minSupport() - Method in interface org.apache.spark.ml.fpm.FPGrowthParams
Minimal support level of the frequent pattern, in the range [0.0, 1.0].
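A support level is the fraction of transactions that contain a pattern; only patterns at or above minSupport are considered frequent. A plain-Python sketch of that counting rule for single items (function and variable names are illustrative, not Spark API):

```python
from collections import Counter

def frequent_items(transactions, min_support):
    """Return items appearing in at least min_support fraction of transactions."""
    n = len(transactions)
    # set(t) so an item counts at most once per transaction.
    counts = Counter(item for t in transactions for item in set(t))
    return {item for item, c in counts.items() if c / n >= min_support}

tx = [["a", "b"], ["a", "c"], ["a"], ["b"]]
print(sorted(frequent_items(tx, 0.5)))  # ['a', 'b']  (support: a=0.75, b=0.5, c=0.25)
```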
minSupport() - Method in class org.apache.spark.ml.fpm.PrefixSpan
Param for the minimal support level (default: 0.1).
minTF() - Method in class org.apache.spark.ml.feature.CountVectorizer
 
minTF() - Method in class org.apache.spark.ml.feature.CountVectorizerModel
 
minTF() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
Filter to ignore rare words in a document.
minTokenLength() - Method in class org.apache.spark.ml.feature.RegexTokenizer
Minimum token length, greater than or equal to 0.
minus(RDD<Tuple2<Object, VD>>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
 
minus(VertexRDD<VD>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
 
minus(RDD<Tuple2<Object, VD>>) - Method in class org.apache.spark.graphx.VertexRDD
For each VertexId present in both this and other, minus will act as a set difference operation returning only those unique VertexId's present in this.
minus(VertexRDD<VD>) - Method in class org.apache.spark.graphx.VertexRDD
For each VertexId present in both this and other, minus will act as a set difference operation returning only those unique VertexId's present in this.
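The set difference here is keyed on the vertex ID only: a vertex of this survives exactly when its ID does not appear in other, regardless of attribute values. A plain-dict analogy of that rule (not the distributed implementation; names are illustrative):

```python
def vertex_minus(this, other):
    """Keep entries of `this` whose vertex ID does not appear in `other`.
    Attribute values in `other` are ignored; only IDs matter."""
    return {vid: attr for vid, attr in this.items() if vid not in other}

a = {1: "x", 2: "y", 3: "z"}
b = {2: "anything"}
print(vertex_minus(a, b))  # {1: 'x', 3: 'z'}
```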
minus(Object) - Method in class org.apache.spark.sql.Column
Subtraction.
minus(byte, byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
 
minus(Decimal, Decimal) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted
 
minus(Decimal, Decimal) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
 
minus(double, double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
 
minus(float, float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
 
minus(int, int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
 
minus(long, long) - Static method in class org.apache.spark.sql.types.LongExactNumeric
 
minus(short, short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
 
minus(Duration) - Method in class org.apache.spark.streaming.Duration
 
minus(Time) - Method in class org.apache.spark.streaming.Time
 
minus(Duration) - Method in class org.apache.spark.streaming.Time
 
minute(Column) - Static method in class org.apache.spark.sql.functions
Extracts the minutes as an integer from a given date/timestamp/string.
minutes() - Static method in class org.apache.spark.scheduler.StatsReportListener
 
minutes(long) - Static method in class org.apache.spark.streaming.Durations
 
Minutes - Class in org.apache.spark.streaming
Helper object that creates an instance of Duration representing a given number of minutes.
Minutes() - Constructor for class org.apache.spark.streaming.Minutes
 
minVal() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
 
minWeightFractionPerNode() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
 
minWeightFractionPerNode() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
 
minWeightFractionPerNode() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
 
minWeightFractionPerNode() - Method in class org.apache.spark.ml.classification.GBTClassifier
 
minWeightFractionPerNode() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
 
minWeightFractionPerNode() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
 
minWeightFractionPerNode() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
 
minWeightFractionPerNode() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
 
minWeightFractionPerNode() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
 
minWeightFractionPerNode() - Method in class org.apache.spark.ml.regression.GBTRegressor
 
minWeightFractionPerNode() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
 
minWeightFractionPerNode() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
 
minWeightFractionPerNode() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
Minimum fraction of the weighted sample count that each child must have after a split.
minWeightFractionPerNode() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
missingValue() - Method in class org.apache.spark.ml.feature.Imputer
 
missingValue() - Method in class org.apache.spark.ml.feature.ImputerModel
 
missingValue() - Method in interface org.apache.spark.ml.feature.ImputerParams
The placeholder for the missing values.
mkList() - Static method in class org.apache.spark.ml.feature.RFormulaParser
 
mkNumericOps(T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
 
mkNumericOps(T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
 
mkNumericOps(T) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
 
mkNumericOps(T) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
 
mkNumericOps(T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
 
mkNumericOps(T) - Static method in class org.apache.spark.sql.types.LongExactNumeric
 
mkNumericOps(T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
 
mkOrderingOps(T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
 
mkOrderingOps(T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
 
mkOrderingOps(T) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
 
mkOrderingOps(T) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
 
mkOrderingOps(T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
 
mkOrderingOps(T) - Static method in class org.apache.spark.sql.types.LongExactNumeric
 
mkOrderingOps(T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
 
mkString() - Method in interface org.apache.spark.sql.Row
Displays all elements of this sequence in a string (without a separator).
mkString(String) - Method in interface org.apache.spark.sql.Row
Displays all elements of this sequence in a string using a separator string.
mkString(String, String, String) - Method in interface org.apache.spark.sql.Row
Displays all elements of this traversable or iterator in a string using start, end, and separator strings.
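The three mkString overloads build one string from the row's elements: no separator, a separator only, or start/separator/end strings. A plain-Python analogue of those variants via str.join (an illustration of the semantics, not the Spark API; note Python prints booleans as True rather than Scala's true):

```python
def mk_string(elems, sep="", start="", end=""):
    """Analogue of Scala's mkString(), mkString(sep), mkString(start, sep, end)."""
    return start + sep.join(str(e) for e in elems) + end

row = [1, "a", True]
print(mk_string(row))                  # 1aTrue
print(mk_string(row, ", "))            # 1, a, True
print(mk_string(row, ", ", "[", "]"))  # [1, a, True]
```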
mkString(String, String, String) - Method in class org.apache.spark.status.api.v1.StackTrace
 
ML_ATTR() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
 
mlDenseMatrixToMLlibDenseMatrix(DenseMatrix) - Static method in class org.apache.spark.mllib.linalg.MatrixImplicits
 
mlDenseVectorToMLlibDenseVector(DenseVector) - Static method in class org.apache.spark.mllib.linalg.VectorImplicits
 
MLEvent - Interface in org.apache.spark.ml
Event emitted by ML operations.
MLEvents - Interface in org.apache.spark.ml
A small trait that defines some methods to send MLEvent.
MLFormatRegister - Interface in org.apache.spark.ml.util
ML export formats should implement this trait so that users can specify a short name rather than the fully qualified class name of the exporter.
mllibDenseMatrixToMLDenseMatrix(DenseMatrix) - Static method in class org.apache.spark.mllib.linalg.MatrixImplicits
 
mllibDenseVectorToMLDenseVector(DenseVector) - Static method in class org.apache.spark.mllib.linalg.VectorImplicits
 
mllibMatrixToMLMatrix(Matrix) - Static method in class org.apache.spark.mllib.linalg.MatrixImplicits
 
mllibSparseMatrixToMLSparseMatrix(SparseMatrix) - Static method in class org.apache.spark.mllib.linalg.MatrixImplicits
 
mllibSparseVectorToMLSparseVector(SparseVector) - Static method in class org.apache.spark.mllib.linalg.VectorImplicits
 
mllibVectorToMLVector(Vector) - Static method in class org.apache.spark.mllib.linalg.VectorImplicits
 
mlMatrixToMLlibMatrix(Matrix) - Static method in class org.apache.spark.mllib.linalg.MatrixImplicits
 
MLPairRDDFunctions<K,V> - Class in org.apache.spark.mllib.rdd
:: DeveloperApi :: Machine learning specific Pair RDD functions.
MLPairRDDFunctions(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Constructor for class org.apache.spark.mllib.rdd.MLPairRDDFunctions
 
MLReadable<T> - Interface in org.apache.spark.ml.util
Trait for objects that provide MLReader.
MLReader<T> - Class in org.apache.spark.ml.util
Abstract class for utility classes that can load ML instances.
MLReader() - Constructor for class org.apache.spark.ml.util.MLReader
 
mlSparseMatrixToMLlibSparseMatrix(SparseMatrix) - Static method in class org.apache.spark.mllib.linalg.MatrixImplicits
 
mlSparseVectorToMLlibSparseVector(SparseVector) - Static method in class org.apache.spark.mllib.linalg.VectorImplicits
 
MLUtils - Class in org.apache.spark.mllib.util
Helper methods to load, save and pre-process data used in MLLib.
MLUtils() - Constructor for class org.apache.spark.mllib.util.MLUtils
 
mlVectorToMLlibVector(Vector) - Static method in class org.apache.spark.mllib.linalg.VectorImplicits
 
MLWritable - Interface in org.apache.spark.ml.util
Trait for classes that provide MLWriter.
MLWriter - Class in org.apache.spark.ml.util
Abstract class for utility classes that can save ML instances in Spark's internal format.
MLWriter() - Constructor for class org.apache.spark.ml.util.MLWriter
 
MLWriterFormat - Interface in org.apache.spark.ml.util
Abstract class to be implemented by objects that provide ML exportability.
mod(Object) - Method in class org.apache.spark.sql.Column
Modulo (a.k.a. remainder) expression.
mode(SaveMode) - Method in class org.apache.spark.sql.DataFrameWriter
Specifies the behavior when data or table already exists.
mode(String) - Method in class org.apache.spark.sql.DataFrameWriter
Specifies the behavior when data or table already exists.
mode() - Method in interface org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectBase
 
mode() - Method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
 
mode() - Method in class org.apache.spark.sql.hive.execution.OptimizedCreateHiveTableAsSelectCommand
 
model(Vector) - Method in interface org.apache.spark.ml.ann.Topology
 
model(long) - Method in interface org.apache.spark.ml.ann.Topology
 
model() - Method in class org.apache.spark.ml.FitEnd
 
Model<M extends Model<M>> - Class in org.apache.spark.ml
:: DeveloperApi :: A fitted model, i.e., a Transformer produced by an Estimator.
Model() - Constructor for class org.apache.spark.ml.Model
 
models() - Method in class org.apache.spark.ml.classification.OneVsRestModel
 
modelType() - Method in class org.apache.spark.ml.classification.NaiveBayes
 
modelType() - Method in class org.apache.spark.ml.classification.NaiveBayesModel
 
modelType() - Method in interface org.apache.spark.ml.classification.NaiveBayesParams
The model type which is a string (case-sensitive).
modelType() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
 
modelType() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
 
MODIFY_ACLS() - Static method in class org.apache.spark.internal.config.UI
 
MODIFY_ACLS_GROUPS() - Static method in class org.apache.spark.internal.config.UI
 
MODULE$ - Static variable in class org.apache.spark.graphx.PartitionStrategy.CanonicalRandomVertexCut$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.graphx.PartitionStrategy.EdgePartition1D$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.graphx.PartitionStrategy.EdgePartition2D$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.graphx.PartitionStrategy.RandomVertexCut$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.internal.io.FileCommitProtocol.EmptyTaskCommitMessage$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.InternalAccumulator.input$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.InternalAccumulator.output$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.InternalAccumulator.shuffleRead$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.InternalAccumulator.shuffleWrite$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.Pipeline.SharedReadWrite$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.recommendation.ALS.InBlock$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.recommendation.ALS.Rating$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.recommendation.ALS.RatingBlock$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Family$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Link$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Log$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Logit$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Probit$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Tweedie$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV2_0$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV3_0$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV2_0$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$.Data$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.fpm.PrefixSpan.Postfix$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.fpm.PrefixSpan.Prefix$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$.Data$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.stat.test.ChiSqTest.Method$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.stat.test.ChiSqTest.NullHypothesis$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest.NullHypothesis$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.rdd.HadoopRDD.HadoopMapPartitionsWithSplitRDD$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.rdd.NewHadoopRDD.NewHadoopMapPartitionsWithSplitRDD$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.GetExecutorLossReason$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutors$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutorsOnHost$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisteredExecutor$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutorFailed$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveDelegationTokens$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveLastAllocatedExecutorId$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveSparkAppConfig$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ReviveOffers$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.Shutdown$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopDriver$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutor$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutors$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateDelegationTokens$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.sql.connector.catalog.LookupCatalog.AsTableIdentifier$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.sql.connector.catalog.LookupCatalog.AsTemporaryViewIdentifier$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndIdentifierParts$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndNamespace$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogObjectIdentifier$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.sql.hive.HiveShim.HiveFunctionWrapper$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.sql.hive.HiveStrategies.HiveTableScans$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.sql.hive.HiveStrategies.Scripts$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.sql.RelationalGroupedDataset.CubeType$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.sql.RelationalGroupedDataset.GroupByType$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.sql.RelationalGroupedDataset.PivotType$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.sql.RelationalGroupedDataset.RollupType$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.sql.types.Decimal.DecimalAsIfIntegral$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.sql.types.Decimal.DecimalIsFractional$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.sql.types.DecimalType.Expression$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.sql.types.DecimalType.Fixed$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetBlockStatus$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetLocations$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetMemoryStatus$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetPeers$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetStorageStatus$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.IsExecutorAlive$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.RemoveBlock$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.RemoveExecutor$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.RemoveRdd$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.RemoveShuffle$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.ReplicateBlock$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.StopBlockManagerMaster$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.TriggerThreadDump$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo$
Static reference to the singleton instance of this Scala object.
MODULE$ - Static variable in class org.apache.spark.ui.JettyUtils.ServletParams$
Static reference to the singleton instance of this Scala object.
monotonically_increasing_id() - Static method in class org.apache.spark.sql.functions
A column expression that generates monotonically increasing 64-bit integers.
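Per the Spark SQL documentation, the generated IDs are unique and monotonically increasing within each partition but not consecutive: the current implementation puts the partition ID in the upper 31 bits and the record number within each partition in the lower 33 bits. A plain-Python sketch of that bit layout (illustrative only, not the Spark implementation):

```python
def monotonic_id(partition_id, record_index):
    """Compose a 64-bit ID: partition ID in the upper 31 bits,
    record index within the partition in the lower 33 bits."""
    return (partition_id << 33) | record_index

print(monotonic_id(0, 0))  # 0       (first record of partition 0)
print(monotonic_id(1, 2))  # third record of partition 1: (1 << 33) + 2
```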
month(Column) - Static method in class org.apache.spark.sql.functions
Extracts the month as an integer from a given date/timestamp/string.
months(String) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
Create a monthly transform for a timestamp or date column.
months(String) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions

months(Column) - Static method in class org.apache.spark.sql.functions
A transform for timestamps and dates to partition data into months.
months_between(Column, Column) - Static method in class org.apache.spark.sql.functions
Returns the number of months between dates start and end.
months_between(Column, Column, boolean) - Static method in class org.apache.spark.sql.functions
Returns the number of months between dates end and start.
msDurationToString(long) - Static method in class org.apache.spark.util.Utils
Returns a human-readable string representing a duration such as "35ms".
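The idea behind msDurationToString can be sketched in a few lines; this is an illustrative re-implementation, not Spark's exact formatting rules:

```python
def ms_duration_to_string(ms: int) -> str:
    # Pick the coarsest unit that keeps the value readable,
    # mirroring outputs like "35 ms" (thresholds are illustrative).
    second, minute, hour = 1000, 60 * 1000, 60 * 60 * 1000
    if ms < second:
        return f"{ms} ms"
    if ms < minute:
        return f"{ms / second:.1f} s"
    if ms < hour:
        return f"{ms / minute:.1f} m"
    return f"{ms / hour:.2f} h"
```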
MsSqlServerDialect - Class in org.apache.spark.sql.jdbc

MsSqlServerDialect() - Constructor for class org.apache.spark.sql.jdbc.MsSqlServerDialect

mu() - Method in class org.apache.spark.mllib.stat.distribution.MultivariateGaussian

MulticlassClassificationEvaluator - Class in org.apache.spark.ml.evaluation
Evaluator for multiclass classification, which expects input columns: prediction, label, weight (optional), and probability (only for logLoss).
MulticlassClassificationEvaluator(String) - Constructor for class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

MulticlassClassificationEvaluator() - Constructor for class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

MulticlassMetrics - Class in org.apache.spark.mllib.evaluation
Evaluator for multiclass classification.
MulticlassMetrics(RDD<? extends Product>) - Constructor for class org.apache.spark.mllib.evaluation.MulticlassMetrics

MultilabelClassificationEvaluator - Class in org.apache.spark.ml.evaluation
:: Experimental :: Evaluator for multi-label classification, which expects two input columns: prediction and label.
MultilabelClassificationEvaluator(String) - Constructor for class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator

MultilabelClassificationEvaluator() - Constructor for class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator

MultilabelMetrics - Class in org.apache.spark.mllib.evaluation
Evaluator for multilabel classification.
MultilabelMetrics(RDD<Tuple2<double[], double[]>>) - Constructor for class org.apache.spark.mllib.evaluation.MultilabelMetrics

multiLabelValidator(int) - Static method in class org.apache.spark.mllib.util.DataValidators
Function to check if labels used for k class multi-label classification are in the range of {0, 1, ..., k - 1}.
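The validation rule above can be sketched in plain Python; the helper name is hypothetical, and Spark's version operates on RDDs of labeled points rather than single values:

```python
def multi_label_validator(k: int):
    # Returns a predicate checking that a label is a whole
    # number in {0, 1, ..., k - 1}.
    def is_valid(label: float) -> bool:
        return label == int(label) and 0 <= label <= k - 1
    return is_valid
```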
MultilayerPerceptronClassificationModel - Class in org.apache.spark.ml.classification
Classification model based on the Multilayer Perceptron.
MultilayerPerceptronClassifier - Class in org.apache.spark.ml.classification
Classifier trainer based on the Multilayer Perceptron.
MultilayerPerceptronClassifier(String) - Constructor for class org.apache.spark.ml.classification.MultilayerPerceptronClassifier

MultilayerPerceptronClassifier() - Constructor for class org.apache.spark.ml.classification.MultilayerPerceptronClassifier

MultilayerPerceptronParams - Interface in org.apache.spark.ml.classification
Params for Multilayer Perceptron.
MultipartIdentifierHelper(Seq<String>) - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.MultipartIdentifierHelper

multiply(DenseMatrix) - Method in interface org.apache.spark.ml.linalg.Matrix
Convenience method for Matrix-DenseMatrix multiplication.
multiply(DenseVector) - Method in interface org.apache.spark.ml.linalg.Matrix
Convenience method for Matrix-DenseVector multiplication.
multiply(Vector) - Method in interface org.apache.spark.ml.linalg.Matrix
Convenience method for Matrix-Vector multiplication.
multiply(BlockMatrix) - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
Left multiplies this BlockMatrix to other, another BlockMatrix.
multiply(BlockMatrix, int) - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
Left multiplies this BlockMatrix to other, another BlockMatrix.
multiply(Matrix) - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
Multiply this matrix by a local matrix on the right.
multiply(Matrix) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Multiply this matrix by a local matrix on the right.
multiply(DenseMatrix) - Method in interface org.apache.spark.mllib.linalg.Matrix
Convenience method for Matrix-DenseMatrix multiplication.
multiply(DenseVector) - Method in interface org.apache.spark.mllib.linalg.Matrix
Convenience method for Matrix-DenseVector multiplication.
multiply(Vector) - Method in interface org.apache.spark.mllib.linalg.Matrix
Convenience method for Matrix-Vector multiplication.
multiply(Object) - Method in class org.apache.spark.sql.Column
Multiplication of this expression and another expression.
MultivariateGaussian - Class in org.apache.spark.ml.stat.distribution
This class provides basic functionality for a Multivariate Gaussian (Normal) Distribution.
MultivariateGaussian(Vector, Matrix) - Constructor for class org.apache.spark.ml.stat.distribution.MultivariateGaussian

MultivariateGaussian - Class in org.apache.spark.mllib.stat.distribution
:: DeveloperApi :: This class provides basic functionality for a Multivariate Gaussian (Normal) Distribution.
MultivariateGaussian(Vector, Matrix) - Constructor for class org.apache.spark.mllib.stat.distribution.MultivariateGaussian

MultivariateOnlineSummarizer - Class in org.apache.spark.mllib.stat
:: DeveloperApi :: MultivariateOnlineSummarizer implements MultivariateStatisticalSummary to compute the mean, variance, minimum, maximum, counts, and nonzero counts for instances in sparse or dense vector format in an online fashion.
MultivariateOnlineSummarizer() - Constructor for class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer

MultivariateStatisticalSummary - Interface in org.apache.spark.mllib.stat
Trait for multivariate statistical summary of a data matrix.
MutableAggregationBuffer - Class in org.apache.spark.sql.expressions
A Row representing a mutable aggregation buffer.
MutableAggregationBuffer() - Constructor for class org.apache.spark.sql.expressions.MutableAggregationBuffer

MutablePair<T1,T2> - Class in org.apache.spark.util
:: DeveloperApi :: A tuple of 2 elements.
MutablePair(T1, T2) - Constructor for class org.apache.spark.util.MutablePair

MutablePair() - Constructor for class org.apache.spark.util.MutablePair
No-arg constructor for serialization.
MutableURLClassLoader - Class in org.apache.spark.util
URL class loader that exposes the `addURL` method in URLClassLoader.
MutableURLClassLoader(URL[], ClassLoader) - Constructor for class org.apache.spark.util.MutableURLClassLoader

myName() - Method in class org.apache.spark.util.InnerClosureFinder

MySQLDialect - Class in org.apache.spark.sql.jdbc

MySQLDialect() - Constructor for class org.apache.spark.sql.jdbc.MySQLDialect


N

n() - Method in class org.apache.spark.ml.feature.NGram
Minimum n-gram length, greater than or equal to 1.
n() - Method in class org.apache.spark.mllib.optimization.NNLS.Workspace

na() - Method in class org.apache.spark.sql.Dataset
Returns a DataFrameNaFunctions for working with missing data.
NaiveBayes - Class in org.apache.spark.ml.classification
Naive Bayes Classifiers.
NaiveBayes(String) - Constructor for class org.apache.spark.ml.classification.NaiveBayes

NaiveBayes() - Constructor for class org.apache.spark.ml.classification.NaiveBayes

NaiveBayes - Class in org.apache.spark.mllib.classification
Trains a Naive Bayes model given an RDD of (label, features) pairs.
NaiveBayes(double) - Constructor for class org.apache.spark.mllib.classification.NaiveBayes

NaiveBayes() - Constructor for class org.apache.spark.mllib.classification.NaiveBayes

NaiveBayesModel - Class in org.apache.spark.ml.classification
Model produced by NaiveBayes. param: pi log of class priors, whose dimension is C (number of classes). param: theta log of class conditional probabilities, whose dimension is C (number of classes) by D (number of features).
NaiveBayesModel - Class in org.apache.spark.mllib.classification
Model for Naive Bayes Classifiers.
NaiveBayesModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.classification

NaiveBayesModel.SaveLoadV1_0$.Data - Class in org.apache.spark.mllib.classification
Model data for model import/export.
NaiveBayesModel.SaveLoadV1_0$.Data$ - Class in org.apache.spark.mllib.classification

NaiveBayesModel.SaveLoadV2_0$ - Class in org.apache.spark.mllib.classification

NaiveBayesModel.SaveLoadV2_0$.Data - Class in org.apache.spark.mllib.classification
Model data for model import/export.
NaiveBayesModel.SaveLoadV2_0$.Data$ - Class in org.apache.spark.mllib.classification

NaiveBayesParams - Interface in org.apache.spark.ml.classification
Params for Naive Bayes Classifiers.
name() - Method in interface org.apache.spark.api.java.JavaRDDLike

name() - Method in class org.apache.spark.ml.attribute.Attribute
Name of the attribute.
name() - Method in class org.apache.spark.ml.attribute.AttributeGroup

NAME() - Static method in class org.apache.spark.ml.attribute.AttributeKeys

name() - Method in class org.apache.spark.ml.attribute.AttributeType

name() - Method in class org.apache.spark.ml.attribute.BinaryAttribute

name() - Method in class org.apache.spark.ml.attribute.NominalAttribute

name() - Method in class org.apache.spark.ml.attribute.NumericAttribute

name() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute

name() - Method in class org.apache.spark.ml.param.Param

name() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$

name() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$

name() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity$

name() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse$

name() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Log$

name() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$

name() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt$

name() - Method in class org.apache.spark.mllib.stat.test.ChiSqTest.Method

name() - Method in class org.apache.spark.rdd.RDD
A friendly name for this RDD.
name() - Method in class org.apache.spark.resource.ResourceInformation
 
name() - Method in class org.apache.spark.resource.ResourceInformationJson

name() - Method in class org.apache.spark.scheduler.AccumulableInfo

name() - Method in class org.apache.spark.scheduler.AsyncEventQueue

name() - Method in interface org.apache.spark.scheduler.Schedulable

name() - Method in class org.apache.spark.scheduler.StageInfo

name() - Method in interface org.apache.spark.SparkStageInfo

name() - Method in class org.apache.spark.SparkStageInfoImpl

name() - Method in class org.apache.spark.sql.catalog.Column

name() - Method in class org.apache.spark.sql.catalog.Database

name() - Method in class org.apache.spark.sql.catalog.Function

name() - Method in class org.apache.spark.sql.catalog.Table

name(String) - Method in class org.apache.spark.sql.Column
Gives the column a name (alias).
name() - Method in interface org.apache.spark.sql.connector.catalog.CatalogPlugin
Called to get this catalog's name.
name() - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension

name() - Method in interface org.apache.spark.sql.connector.catalog.Identifier

name() - Method in interface org.apache.spark.sql.connector.catalog.Table
A name to identify this table.
name() - Method in interface org.apache.spark.sql.connector.expressions.Transform
Returns the transform function name.
name() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
Returns the user-specified name of the query, or null if not specified.
name() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryStartedEvent

name() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress

name(String) - Method in class org.apache.spark.sql.TypedColumn
Gives the TypedColumn a name (alias).
name() - Method in class org.apache.spark.sql.types.StructField

name() - Method in class org.apache.spark.status.api.v1.AccumulableInfo

name() - Method in class org.apache.spark.status.api.v1.ApplicationInfo

name() - Method in class org.apache.spark.status.api.v1.JobData

name() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo

name() - Method in class org.apache.spark.status.api.v1.StageData

name() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo

name() - Method in class org.apache.spark.storage.BlockId
A globally unique identifier for this Block.
name() - Method in class org.apache.spark.storage.BroadcastBlockId

name() - Method in class org.apache.spark.storage.RDDBlockId

name() - Method in class org.apache.spark.storage.RDDInfo

name() - Method in class org.apache.spark.storage.ShuffleBlockBatchId

name() - Method in class org.apache.spark.storage.ShuffleBlockId

name() - Method in class org.apache.spark.storage.ShuffleDataBlockId

name() - Method in class org.apache.spark.storage.ShuffleIndexBlockId

name() - Method in class org.apache.spark.storage.StreamBlockId

name() - Method in class org.apache.spark.storage.TaskResultBlockId

name() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo

name() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo

name() - Method in class org.apache.spark.util.AccumulatorV2
Returns the name of this accumulator; it can only be called after registration.
name() - Method in class org.apache.spark.util.MethodIdentifier
 
NamedReference - Interface in org.apache.spark.sql.connector.expressions
Represents a field or column reference in the public logical expression API.
namedThreadFactory(String) - Static method in class org.apache.spark.util.ThreadUtils
Create a thread factory that names threads with a prefix and also sets the threads to daemon.
NamedTransform - Class in org.apache.spark.sql.connector.expressions
Convenience extractor for any Transform.
NamedTransform() - Constructor for class org.apache.spark.sql.connector.expressions.NamedTransform

names() - Method in interface org.apache.spark.metrics.ExecutorMetricType

names() - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics

names() - Static method in class org.apache.spark.metrics.ProcessTreeMetrics

names() - Method in interface org.apache.spark.metrics.SingleValueExecutorMetricType

names() - Method in class org.apache.spark.ml.feature.VectorSlicer
An array of feature names to select features from a vector column.
names() - Method in class org.apache.spark.sql.types.StructType
Returns all field names in an array.
namespace() - Method in interface org.apache.spark.sql.connector.catalog.Identifier

NamespaceChange - Interface in org.apache.spark.sql.connector.catalog
NamespaceChange subclasses represent requested changes to a namespace.
NamespaceChange.RemoveProperty - Class in org.apache.spark.sql.connector.catalog
A NamespaceChange to remove a namespace property.
NamespaceChange.SetProperty - Class in org.apache.spark.sql.connector.catalog
A NamespaceChange to set a namespace property.
namespaceExists(String[]) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension

namespaceExists(String[]) - Method in interface org.apache.spark.sql.connector.catalog.SupportsNamespaces
Test whether a namespace exists.
NamespaceHelper(String[]) - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.NamespaceHelper

nameToObjectMap() - Static method in class org.apache.spark.mllib.stat.correlation.CorrelationNames

nanoTime() - Method in interface org.apache.spark.util.Clock
Current value of high resolution time source, in ns.
nanSafeCompareDoubles(double, double) - Static method in class org.apache.spark.util.Utils
NaN-safe version of java.lang.Double.compare() which allows NaN values to be compared according to semantics where NaN == NaN and NaN is greater than any non-NaN double.
nanSafeCompareFloats(float, float) - Static method in class org.apache.spark.util.Utils
NaN-safe version of java.lang.Float.compare() which allows NaN values to be compared according to semantics where NaN == NaN and NaN is greater than any non-NaN float.
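The NaN semantics described for nanSafeCompareDoubles and nanSafeCompareFloats (NaN == NaN, and NaN sorts above every non-NaN value) can be illustrated with a small stand-alone sketch:

```python
import math

def nan_safe_compare(x: float, y: float) -> int:
    # NaN == NaN, and NaN is greater than any non-NaN value;
    # otherwise behaves like an ordinary three-way compare.
    x_nan, y_nan = math.isnan(x), math.isnan(y)
    if x_nan and y_nan:
        return 0
    if x_nan:
        return 1
    if y_nan:
        return -1
    return (x > y) - (x < y)
```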
nanvl(Column, Column) - Static method in class org.apache.spark.sql.functions
Returns col1 if it is not NaN, or col2 if col1 is NaN.
NarrowDependency<T> - Class in org.apache.spark
:: DeveloperApi :: Base class for dependencies where each partition of the child RDD depends on a small number of partitions of the parent RDD.
NarrowDependency(RDD<T>) - Constructor for class org.apache.spark.NarrowDependency

ndcgAt(int) - Method in class org.apache.spark.mllib.evaluation.RankingMetrics
Compute the average NDCG value of all the queries, truncated at ranking position k.
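A minimal single-query sketch of NDCG truncated at position k, assuming binary relevance (an item is relevant iff it appears in the ground-truth set); RankingMetrics averages this quantity over all queries. The function name is illustrative:

```python
import math

def ndcg_at(predicted, ground_truth, k):
    # Binary-relevance NDCG@k for one query: discounted gain of the
    # predicted ranking divided by the gain of an ideal ranking.
    relevant = set(ground_truth)
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(predicted[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(k, len(relevant))))
    return dcg / ideal if ideal > 0 else 0.0
```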
needConversion() - Method in class org.apache.spark.sql.sources.BaseRelation
Whether it needs to convert the objects in Row to the internal representation, for example: java.lang.String to UTF8String, java.lang.Decimal to Decimal. If needConversion is false, buildScan() should return an RDD of InternalRow.
needsReconfiguration() - Method in interface org.apache.spark.sql.connector.read.streaming.ContinuousStream
The execution engine will call this method in every epoch to determine if new input partitions need to be generated, which may be required if, for example, the underlying source system has had partitions added or removed.
negate(Column) - Static method in class org.apache.spark.sql.functions
Unary minus, i.e. negate the expression.
negate(byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric

negate(Decimal) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted

negate(Decimal) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric

negate(double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric

negate(float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric

negate(int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric

negate(long) - Static method in class org.apache.spark.sql.types.LongExactNumeric

negate(short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric

Network - Class in org.apache.spark.internal.config

Network() - Constructor for class org.apache.spark.internal.config.Network
 
newAccumulatorInfos(Iterable<AccumulableInfo>) - Static method in class org.apache.spark.status.LiveEntityHelpers

newAPIHadoopFile(String, Class<F>, Class<K>, Class<V>, Configuration) - Method in class org.apache.spark.api.java.JavaSparkContext
Get an RDD for a given Hadoop file with an arbitrary new API InputFormat and extra configuration options to pass to the input format.
newAPIHadoopFile(String, ClassTag<K>, ClassTag<V>, ClassTag<F>) - Method in class org.apache.spark.SparkContext
Smarter version of newApiHadoopFile that uses class tags to figure out the classes of keys, values, and the org.apache.hadoop.mapreduce.InputFormat (new MapReduce API) so that users don't need to pass them directly.
newAPIHadoopFile(String, Class<F>, Class<K>, Class<V>, Configuration) - Method in class org.apache.spark.SparkContext
Get an RDD for a given Hadoop file with an arbitrary new API InputFormat and extra configuration options to pass to the input format.
newAPIHadoopRDD(Configuration, Class<F>, Class<K>, Class<V>) - Method in class org.apache.spark.api.java.JavaSparkContext
Get an RDD for a given Hadoop file with an arbitrary new API InputFormat and extra configuration options to pass to the input format.
newAPIHadoopRDD(Configuration, Class<F>, Class<K>, Class<V>) - Method in class org.apache.spark.SparkContext
Get an RDD for a given Hadoop file with an arbitrary new API InputFormat and extra configuration options to pass to the input format.
newBooleanArrayEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newBooleanEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newBooleanSeqEncoder() - Method in class org.apache.spark.sql.SQLImplicits
Deprecated.
use newSequenceEncoder
newBoxedBooleanEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newBoxedByteEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newBoxedDoubleEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newBoxedFloatEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newBoxedIntEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newBoxedLongEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newBoxedShortEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newBroadcast(T, boolean, long, ClassTag<T>) - Method in interface org.apache.spark.broadcast.BroadcastFactory
Creates a new broadcast variable.
newByteArrayEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newByteEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newByteSeqEncoder() - Method in class org.apache.spark.sql.SQLImplicits
Deprecated.
use newSequenceEncoder
newComment() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnComment

newDaemonCachedThreadPool(String) - Static method in class org.apache.spark.util.ThreadUtils
Wrapper over newCachedThreadPool.
newDaemonCachedThreadPool(String, int, int) - Static method in class org.apache.spark.util.ThreadUtils
Create a cached thread pool whose max number of threads is maxThreadNumber.
newDaemonFixedThreadPool(int, String) - Static method in class org.apache.spark.util.ThreadUtils
Wrapper over newFixedThreadPool.
newDaemonSingleThreadExecutor(String) - Static method in class org.apache.spark.util.ThreadUtils
Wrapper over newSingleThreadExecutor.
newDaemonSingleThreadScheduledExecutor(String) - Static method in class org.apache.spark.util.ThreadUtils
Wrapper over ScheduledThreadPoolExecutor.
newDaemonThreadPoolScheduledExecutor(String, int) - Static method in class org.apache.spark.util.ThreadUtils
Wrapper over ScheduledThreadPoolExecutor.
newDataType() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnType

newDateEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newDoubleArrayEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newDoubleEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newDoubleSeqEncoder() - Method in class org.apache.spark.sql.SQLImplicits
Deprecated.
use newSequenceEncoder
newFloatArrayEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newFloatEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newFloatSeqEncoder() - Method in class org.apache.spark.sql.SQLImplicits
Deprecated.
use newSequenceEncoder
newForkJoinPool(String, int) - Static method in class org.apache.spark.util.ThreadUtils
Construct a new ForkJoinPool with a specified max parallelism and name prefix.
NewHadoopMapPartitionsWithSplitRDD$() - Constructor for class org.apache.spark.rdd.NewHadoopRDD.NewHadoopMapPartitionsWithSplitRDD$
 
NewHadoopRDD<K,V> - Class in org.apache.spark.rdd
:: DeveloperApi :: An RDD that provides core functionality for reading data stored in Hadoop (e.g., files in HDFS, sources in HBase, or S3), using the new MapReduce API (org.apache.hadoop.mapreduce).
NewHadoopRDD(SparkContext, Class<? extends InputFormat<K, V>>, Class<K>, Class<V>, Configuration) - Constructor for class org.apache.spark.rdd.NewHadoopRDD

NewHadoopRDD.NewHadoopMapPartitionsWithSplitRDD$ - Class in org.apache.spark.rdd

newId() - Static method in class org.apache.spark.util.AccumulatorContext
Returns a globally unique ID for a new AccumulatorV2.
newInstance() - Method in class org.apache.spark.serializer.JavaSerializer

newInstance() - Method in class org.apache.spark.serializer.KryoSerializer

newInstance() - Method in class org.apache.spark.serializer.Serializer
Creates a new SerializerInstance.
newInstantEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newIntArrayEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newIntEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newIntSeqEncoder() - Method in class org.apache.spark.sql.SQLImplicits
Deprecated.
use newSequenceEncoder
newJavaDecimalEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newKryo() - Method in class org.apache.spark.serializer.KryoSerializer

newKryoOutput() - Method in class org.apache.spark.serializer.KryoSerializer

newLocalDateEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newLongArrayEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newLongEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newLongSeqEncoder() - Method in class org.apache.spark.sql.SQLImplicits
Deprecated.
use newSequenceEncoder
newMapEncoder(TypeTags.TypeTag<T>) - Method in class org.apache.spark.sql.SQLImplicits

newName() - Method in class org.apache.spark.sql.connector.catalog.TableChange.RenameColumn

newProductArrayEncoder(TypeTags.TypeTag<A>) - Method in class org.apache.spark.sql.SQLImplicits

newProductEncoder(TypeTags.TypeTag<T>) - Method in interface org.apache.spark.sql.LowPrioritySQLImplicits

newProductSeqEncoder(TypeTags.TypeTag<A>) - Method in class org.apache.spark.sql.SQLImplicits
Deprecated.
use newSequenceEncoder
newScalaDecimalEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newScanBuilder(CaseInsensitiveStringMap) - Method in interface org.apache.spark.sql.connector.catalog.SupportsRead
Returns a ScanBuilder which can be used to build a Scan.
newSequenceEncoder(TypeTags.TypeTag<T>) - Method in class org.apache.spark.sql.SQLImplicits

newSession() - Method in interface org.apache.spark.sql.hive.client.HiveClient
Return a HiveClient as a new session that will share the class loader and Hive client.
newSession() - Method in class org.apache.spark.sql.SparkSession
Start a new session with isolated SQL configurations and temporary tables; registered functions are isolated, but the underlying SparkContext and cached data are shared.
newSession() - Method in class org.apache.spark.sql.SQLContext
Returns a SQLContext as a new session, with separated SQL configurations, temporary tables, and registered functions, but sharing the same SparkContext, cached data, and other things.
newSetEncoder(TypeTags.TypeTag<T>) - Method in class org.apache.spark.sql.SQLImplicits
Notice that we serialize Set to Catalyst array.
newShortArrayEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newShortEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newShortSeqEncoder() - Method in class org.apache.spark.sql.SQLImplicits
Deprecated.
use newSequenceEncoder
newStringArrayEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newStringEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newStringSeqEncoder() - Method in class org.apache.spark.sql.SQLImplicits
Deprecated.
use newSequenceEncoder
newTaskTempFile(TaskAttemptContext, Option<String>, String) - Method in class org.apache.spark.internal.io.FileCommitProtocol
Notifies the commit protocol to add a new file, and gets back the full path that should be used.
newTaskTempFile(TaskAttemptContext, Option<String>, String) - Method in class org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
 
newTaskTempFileAbsPath(TaskAttemptContext, String, String) - Method in class org.apache.spark.internal.io.FileCommitProtocol
Similar to newTaskTempFile(), but allows files to be committed to an absolute output location.
newTaskTempFileAbsPath(TaskAttemptContext, String, String) - Method in class org.apache.spark.internal.io.HadoopMapReduceCommitProtocol

newTemporaryConfiguration(boolean) - Static method in class org.apache.spark.sql.hive.HiveUtils
Constructs a configuration for Hive, where the metastore is located in a temp directory.
newTimeStampEncoder() - Method in class org.apache.spark.sql.SQLImplicits

newVersionExternalTempPath(Path, Configuration, String) - Method in interface org.apache.spark.sql.hive.execution.SaveAsHiveFile

newWriteBuilder(CaseInsensitiveStringMap) - Method in interface org.apache.spark.sql.connector.catalog.SupportsWrite
Returns a WriteBuilder which can be used to create BatchWrite.
next() - Method in class org.apache.spark.InterruptibleIterator

next() - Method in interface org.apache.spark.mllib.clustering.LDAOptimizer

next() - Method in interface org.apache.spark.sql.connector.read.PartitionReader
Proceed to the next record; returns false if there are no more records.
next() - Method in class org.apache.spark.status.LiveRDDPartition

next_day(Column, String) - Static method in class org.apache.spark.sql.functions
Returns the first date which is later than the value of the date column that is on the specified day of the week.
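The next_day semantics (first date strictly later than the given date that falls on the requested day of the week) can be sketched without Spark using the standard library:

```python
from datetime import date, timedelta

_DAYS = {"Mon": 0, "Tue": 1, "Wed": 2, "Thu": 3, "Fri": 4, "Sat": 5, "Sun": 6}

def next_day(d: date, day_of_week: str) -> date:
    # First date strictly later than d on the given day of the week;
    # if d already falls on that day, the result is one week later.
    target = _DAYS[day_of_week[:3].title()]
    delta = (target - d.weekday() - 1) % 7 + 1
    return d + timedelta(days=delta)
```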
nextValue() - Method in class org.apache.spark.mllib.random.ExponentialGenerator

nextValue() - Method in class org.apache.spark.mllib.random.GammaGenerator

nextValue() - Method in class org.apache.spark.mllib.random.LogNormalGenerator

nextValue() - Method in class org.apache.spark.mllib.random.PoissonGenerator

nextValue() - Method in interface org.apache.spark.mllib.random.RandomDataGenerator
Returns an i.i.d. sample as a generic type from an underlying distribution.
nextValue() - Method in class org.apache.spark.mllib.random.StandardNormalGenerator

nextValue() - Method in class org.apache.spark.mllib.random.UniformGenerator

nextValue() - Method in class org.apache.spark.mllib.random.WeibullGenerator

NGram - Class in org.apache.spark.ml.feature
A feature transformer that converts the input array of strings into an array of n-grams.
NGram(String) - Constructor for class org.apache.spark.ml.feature.NGram

NGram() - Constructor for class org.apache.spark.ml.feature.NGram

NioBufferedFileInputStream - Class in org.apache.spark.io
InputStream implementation which uses a direct buffer to read a file, to avoid an extra copy of data between Java and native memory which happens when using BufferedInputStream.
NioBufferedFileInputStream(File, int) - Constructor for class org.apache.spark.io.NioBufferedFileInputStream

NioBufferedFileInputStream(File) - Constructor for class org.apache.spark.io.NioBufferedFileInputStream

NNLS - Class in org.apache.spark.mllib.optimization
Object used to solve nonnegative least squares problems using a modified projected gradient method.
NNLS() - Constructor for class org.apache.spark.mllib.optimization.NNLS

NNLS.Workspace - Class in org.apache.spark.mllib.optimization

NO_PREF() - Static method in class org.apache.spark.scheduler.TaskLocality

NO_RESOURCE - Static variable in class org.apache.spark.launcher.SparkLauncher
A special value for the resource that tells Spark to not try to process the app resource as a file.
Node - Class in org.apache.spark.ml.tree
Decision tree node interface.
Node() - Constructor for class org.apache.spark.ml.tree.Node

Node - Class in org.apache.spark.mllib.tree.model
:: DeveloperApi :: Node in a decision tree.
Node(int, Predict, double, boolean, Option<Split>, Option<Node>, Option<Node>, Option<InformationGainStats>) - Constructor for class org.apache.spark.mllib.tree.model.Node

node() - Method in class org.apache.spark.scheduler.BlacklistedExecutor

NODE_LOCAL() - Static method in class org.apache.spark.scheduler.TaskLocality

nodeBlacklist() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors

NodeData(int, double, double, double[], long, double, int, int, DecisionTreeModelReadWrite.SplitData) - Constructor for class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
 
nodeData() - Method in class org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData

NodeData(int, int, org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.PredictData, double, boolean, Option<org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.SplitData>, Option<Object>, Option<Object>, Option<Object>) - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData

NodeData$() - Constructor for class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData$

NodeData$() - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData$

nodeId() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData

noLocality() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer

Nominal() - Static method in class org.apache.spark.ml.attribute.AttributeType
Nominal type.
NominalAttribute - Class in org.apache.spark.ml.attribute
:: DeveloperApi :: A nominal attribute.
NONE - Static variable in class org.apache.spark.api.java.StorageLevels

None - Static variable in class org.apache.spark.graphx.TripletFields
None of the triplet fields are exposed.
NONE() - Static method in class org.apache.spark.scheduler.SchedulingMode

NONE() - Static method in class org.apache.spark.storage.StorageLevel
Various predefined StorageLevels, and utility functions for creating new storage levels.
nonLocalPaths(String, boolean) - Static method in class org.apache.spark.util.Utils
Return all non-local paths from a comma-separated list of paths.
nonnegative() - Method in class org.apache.spark.ml.recommendation.ALS

nonnegative() - Method in interface org.apache.spark.ml.recommendation.ALSParams
Param for whether to apply nonnegativity constraints.
nonNegativeHash(Object) - Static method in class org.apache.spark.util.Utils

nonNegativeMod(int, int) - Static method in class org.apache.spark.util.Utils
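This entry has no description; judging by the name, it presumably returns a remainder that is always non-negative, unlike Java's % operator for negative operands (useful when mapping hash codes to partition indices). A hypothetical sketch of that behavior, not taken from Spark's source:

```python
def non_negative_mod(x: int, mod: int) -> int:
    # Mimic Java's %, which keeps the sign of x, then shift
    # negative results into [0, mod).
    raw = x % mod if x >= 0 else -((-x) % mod)
    return raw + mod if raw < 0 else raw
```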
 
NoopDialect - Class in org.apache.spark.sql.jdbc
NOOP dialect object, always returning the neutral element.
NoopDialect() - Constructor for class org.apache.spark.sql.jdbc.NoopDialect

norm(Vector, double) - Static method in class org.apache.spark.ml.linalg.Vectors
Returns the p-norm of this vector.
norm(Vector, double) - Static method in class org.apache.spark.mllib.linalg.Vectors
Returns the p-norm of this vector.
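A plain-Python sketch of the p-norm these two methods compute (with the usual convention that p = infinity gives the max-abs norm):

```python
def p_norm(v, p):
    # p-norm of a vector: (sum |x_i|^p)^(1/p); infinity -> max |x_i|.
    if p == float("inf"):
        return max(abs(x) for x in v)
    return sum(abs(x) ** p for x in v) ** (1.0 / p)
```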
NormalEquationSolver - Interface in org.apache.spark.ml.optim
Interface for classes that solve the normal equations locally.
normalizeDuration(long) - Static method in class org.apache.spark.streaming.ui.UIUtils
Find the best TimeUnit for converting milliseconds to a friendly string.
Normalizer - Class in org.apache.spark.ml.feature
Normalize a vector to have unit norm using the given p-norm.
Normalizer(String) - Constructor for class org.apache.spark.ml.feature.Normalizer

Normalizer() - Constructor for class org.apache.spark.ml.feature.Normalizer

Normalizer - Class in org.apache.spark.mllib.feature
Normalizes samples individually to unit L^p norm. For any 1 <= p < Double.PositiveInfinity, normalizes samples using sum(abs(vector)^p)^(1/p) as the norm.
Normalizer(double) - Constructor for class org.apache.spark.mllib.feature.Normalizer

Normalizer() - Constructor for class org.apache.spark.mllib.feature.Normalizer

normalizeToProbabilitiesInPlace(DenseVector) - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
Normalize a vector of raw predictions to be a multinomial probability vector, in place.
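A non-in-place sketch of the normalization described, assuming non-negative raw predictions (Spark's version mutates the DenseVector instead of returning a new list):

```python
def normalize_to_probabilities(raw):
    # Scale non-negative raw predictions so they sum to 1,
    # yielding a multinomial probability vector.
    total = sum(raw)
    if total <= 0:
        raise ValueError("raw predictions must have a positive sum")
    return [x / total for x in raw]
```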
normalJavaRDD(JavaSparkContext, long, int, long) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
Java-friendly version of RandomRDDs.normalRDD.
normalJavaRDD(JavaSparkContext, long, int) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.normalJavaRDD with the default seed.
normalJavaRDD(JavaSparkContext, long) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.normalJavaRDD with the default number of partitions and the default seed.
normalJavaVectorRDD(JavaSparkContext, long, int, int, long) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
Java-friendly version of RandomRDDs.normalVectorRDD.
normalJavaVectorRDD(JavaSparkContext, long, int, int) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.normalJavaVectorRDD with the default seed.
normalJavaVectorRDD(JavaSparkContext, long, int) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.normalJavaVectorRDD with the default number of partitions and the default seed.
normalRDD(SparkContext, long, int, long) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
Generates an RDD comprised of i.i.d. samples from the standard normal distribution.
normalVectorRDD(SparkContext, long, int, int, long) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
Generates an RDD[Vector] with vectors containing i.i.d. samples from the standard normal distribution.
normL1(Column, Column) - 类 中的静态方法org.apache.spark.ml.stat.Summarizer
 
normL1(Column) - 类 中的静态方法org.apache.spark.ml.stat.Summarizer
 
normL1() - 类 中的方法org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
L1 norm of each dimension.
normL1() - 接口 中的方法org.apache.spark.mllib.stat.MultivariateStatisticalSummary
L1 norm of each column
normL2(Column, Column) - 类 中的静态方法org.apache.spark.ml.stat.Summarizer
 
normL2(Column) - 类 中的静态方法org.apache.spark.ml.stat.Summarizer
 
normL2() - 类 中的方法org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
L2 (Euclidean) norm of each dimension.
normL2() - 接口 中的方法org.apache.spark.mllib.stat.MultivariateStatisticalSummary
Euclidean magnitude of each column
normPdf(double, double, double, double) - 类 中的静态方法org.apache.spark.mllib.stat.KernelDensity
Evaluates the PDF of a normal distribution.
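The density being evaluated is the standard Gaussian PDF. As a plain-Java sketch of the formula itself (hypothetical `NormalPdf` class; note the real `KernelDensity.normPdf` takes four arguments, including a precomputed log term, so this is the math rather than the exact signature):

```java
public class NormalPdf {
    // Density of N(mean, sd^2) at x:
    // exp(-(x - mean)^2 / (2 sd^2)) / (sd * sqrt(2 pi)).
    public static double pdf(double mean, double sd, double x) {
        double z = (x - mean) / sd;
        return Math.exp(-0.5 * z * z) / (sd * Math.sqrt(2.0 * Math.PI));
    }

    public static void main(String[] args) {
        System.out.println(pdf(0.0, 1.0, 0.0)); // ~0.3989
    }
}
```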
NoSuccess() - 类 中的静态方法org.apache.spark.ml.feature.RFormulaParser
 
not(Function0<Parsers.Parser<T>>) - 类 中的静态方法org.apache.spark.ml.feature.RFormulaParser
 
not(Column) - 类 中的静态方法org.apache.spark.sql.functions
Inversion of boolean expression, i.e. NOT.
Not - org.apache.spark.sql.sources中的类
A filter that evaluates to true iff child is evaluated to false.
Not(Filter) - 类 的构造器org.apache.spark.sql.sources.Not
 
notEqual(Object) - 类 中的方法org.apache.spark.sql.Column
Inequality test.
notifyPartitionCompletion(int, int) - 接口 中的方法org.apache.spark.scheduler.TaskScheduler
 
NoTimeout() - 类 中的静态方法org.apache.spark.sql.streaming.GroupStateTimeout
No timeout.
ntile(int) - 类 中的静态方法org.apache.spark.sql.functions
Window function: returns the ntile group id (from 1 to n inclusive) in an ordered window partition.
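The usual ntile distribution splits an ordered partition of `total` rows into `n` groups whose sizes differ by at most one, with the earlier groups taking the extra rows. A plain-Java sketch of that assignment (hypothetical `Ntile` class; Spark's SQL implementation is elsewhere):

```java
public class Ntile {
    // Group id (1..n) for row rowIndex (0-based) in an ordered partition
    // of `total` rows: the first (total % n) groups get one extra row.
    public static int ntile(int n, int rowIndex, int total) {
        int base = total / n;
        int extra = total % n;
        int boundary = extra * (base + 1); // rows covered by the larger groups
        if (rowIndex < boundary) return rowIndex / (base + 1) + 1;
        return extra + (rowIndex - boundary) / base + 1;
    }

    public static void main(String[] args) {
        // 10 rows, 4 tiles -> group sizes 3,3,2,2
        System.out.println(ntile(4, 0, 10)); // 1
        System.out.println(ntile(4, 9, 10)); // 4
    }
}
```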
nullable() - 类 中的方法org.apache.spark.sql.catalog.Column
 
nullable() - 类 中的方法org.apache.spark.sql.expressions.UserDefinedFunction
Returns true when the UDF can return a nullable value.
nullable() - 类 中的方法org.apache.spark.sql.types.StructField
 
nullDeviance() - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
 
nullHypothesis() - 类 中的方法org.apache.spark.mllib.stat.test.ChiSqTestResult
 
nullHypothesis() - 类 中的方法org.apache.spark.mllib.stat.test.KolmogorovSmirnovTestResult
 
nullHypothesis() - 接口 中的方法org.apache.spark.mllib.stat.test.StreamingTestMethod
 
nullHypothesis() - 类 中的静态方法org.apache.spark.mllib.stat.test.StudentTTest
 
nullHypothesis() - 接口 中的方法org.apache.spark.mllib.stat.test.TestResult
Null hypothesis of the test.
nullHypothesis() - 类 中的静态方法org.apache.spark.mllib.stat.test.WelchTTest
 
NullHypothesis$() - 类 的构造器org.apache.spark.mllib.stat.test.ChiSqTest.NullHypothesis$
 
NullHypothesis$() - 类 的构造器org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest.NullHypothesis$
 
NullType - 类 中的静态变量org.apache.spark.sql.types.DataTypes
Gets the NullType object.
NullType - org.apache.spark.sql.types中的类
The data type representing NULL values.
NullType() - 类 的构造器org.apache.spark.sql.types.NullType
 
NUM_ATTRIBUTES() - 类 中的静态方法org.apache.spark.ml.attribute.AttributeKeys
 
NUM_PARTITIONS() - 类 中的静态方法org.apache.spark.ui.UIWorkloadGenerator
 
NUM_REPLAY_THREADS() - 类 中的静态方法org.apache.spark.internal.config.History
 
NUM_VALUES() - 类 中的静态方法org.apache.spark.ml.attribute.AttributeKeys
 
numAccums() - 类 中的静态方法org.apache.spark.util.AccumulatorContext
Returns the number of accumulators registered.
numActiveBatches() - 类 中的方法org.apache.spark.status.api.v1.streaming.StreamingStatistics
 
numActiveOutputOps() - 类 中的方法org.apache.spark.status.api.v1.streaming.BatchInfo
 
numActiveReceivers() - 类 中的方法org.apache.spark.status.api.v1.streaming.StreamingStatistics
 
numActives() - 类 中的方法org.apache.spark.ml.linalg.DenseMatrix
 
numActives() - 类 中的方法org.apache.spark.ml.linalg.DenseVector
 
numActives() - 接口 中的方法org.apache.spark.ml.linalg.Matrix
Find the number of values stored explicitly.
numActives() - 类 中的方法org.apache.spark.ml.linalg.SparseMatrix
 
numActives() - 类 中的方法org.apache.spark.ml.linalg.SparseVector
 
numActives() - 接口 中的方法org.apache.spark.ml.linalg.Vector
Number of active entries.
numActives() - 类 中的方法org.apache.spark.mllib.linalg.DenseMatrix
 
numActives() - 类 中的方法org.apache.spark.mllib.linalg.DenseVector
 
numActives() - 接口 中的方法org.apache.spark.mllib.linalg.Matrix
Find the number of values stored explicitly.
numActives() - 类 中的方法org.apache.spark.mllib.linalg.SparseMatrix
 
numActives() - 类 中的方法org.apache.spark.mllib.linalg.SparseVector
 
numActives() - 接口 中的方法org.apache.spark.mllib.linalg.Vector
Number of active entries.
numActiveStages() - 类 中的方法org.apache.spark.status.api.v1.JobData
 
numActiveTasks() - 接口 中的方法org.apache.spark.SparkStageInfo
 
numActiveTasks() - 类 中的方法org.apache.spark.SparkStageInfoImpl
 
numActiveTasks() - 类 中的方法org.apache.spark.status.api.v1.JobData
 
numActiveTasks() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
numAttributes() - 类 中的方法org.apache.spark.ml.attribute.AttributeGroup
 
numAvailableOutputs() - 类 中的方法org.apache.spark.ShuffleStatus
Number of partitions that have shuffle outputs.
numBins() - 类 中的方法org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
Param for number of bins to down-sample the curves (ROC curve, PR curve) in area computation.
numBins() - 类 中的方法org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
 
numBuckets() - 类 中的方法org.apache.spark.ml.feature.QuantileDiscretizer
 
numBuckets() - 接口 中的方法org.apache.spark.ml.feature.QuantileDiscretizerBase
Number of buckets (quantiles, or categories) into which data points are grouped.
numBucketsArray() - 类 中的方法org.apache.spark.ml.feature.QuantileDiscretizer
 
numBucketsArray() - 接口 中的方法org.apache.spark.ml.feature.QuantileDiscretizerBase
Array of number of buckets (quantiles, or categories) into which data points are grouped.
numCachedPartitions() - 类 中的方法org.apache.spark.status.api.v1.RDDStorageInfo
 
numCachedPartitions() - 类 中的方法org.apache.spark.storage.RDDInfo
 
numCategories() - 类 中的方法org.apache.spark.ml.tree.CategoricalSplit
 
numCategories() - 类 中的方法org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData
 
numClasses() - 类 中的方法org.apache.spark.ml.classification.ClassificationModel
Number of classes (values which the label can take).
numClasses() - 类 中的方法org.apache.spark.ml.classification.DecisionTreeClassificationModel
 
numClasses() - 类 中的方法org.apache.spark.ml.classification.GBTClassificationModel
 
numClasses() - 类 中的方法org.apache.spark.ml.classification.LinearSVCModel
 
numClasses() - 类 中的方法org.apache.spark.ml.classification.LogisticRegressionModel
 
numClasses() - 类 中的方法org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
 
numClasses() - 类 中的方法org.apache.spark.ml.classification.NaiveBayesModel
 
numClasses() - 类 中的方法org.apache.spark.ml.classification.OneVsRestModel
 
numClasses() - 类 中的方法org.apache.spark.ml.classification.RandomForestClassificationModel
 
numClasses() - 类 中的方法org.apache.spark.mllib.classification.LogisticRegressionModel
 
numClasses() - 类 中的方法org.apache.spark.mllib.tree.configuration.Strategy
 
numColBlocks() - 类 中的方法org.apache.spark.mllib.linalg.distributed.BlockMatrix
 
numCols() - 类 中的方法org.apache.spark.ml.linalg.DenseMatrix
 
numCols() - 接口 中的方法org.apache.spark.ml.linalg.Matrix
Number of columns.
numCols() - 类 中的方法org.apache.spark.ml.linalg.SparseMatrix
 
numCols() - 类 中的方法org.apache.spark.mllib.linalg.DenseMatrix
 
numCols() - 类 中的方法org.apache.spark.mllib.linalg.distributed.BlockMatrix
 
numCols() - 类 中的方法org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
Gets or computes the number of columns.
numCols() - 接口 中的方法org.apache.spark.mllib.linalg.distributed.DistributedMatrix
Gets or computes the number of columns.
numCols() - 类 中的方法org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
 
numCols() - 类 中的方法org.apache.spark.mllib.linalg.distributed.RowMatrix
Gets or computes the number of columns.
numCols() - 接口 中的方法org.apache.spark.mllib.linalg.Matrix
Number of columns.
numCols() - 类 中的方法org.apache.spark.mllib.linalg.SparseMatrix
 
numCols() - 类 中的方法org.apache.spark.sql.vectorized.ColumnarBatch
Returns the number of columns that make up this batch.
numCompletedIndices() - 类 中的方法org.apache.spark.status.api.v1.JobData
 
numCompletedIndices() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
numCompletedOutputOps() - 类 中的方法org.apache.spark.status.api.v1.streaming.BatchInfo
 
numCompletedStages() - 类 中的方法org.apache.spark.status.api.v1.JobData
 
numCompletedTasks() - 接口 中的方法org.apache.spark.SparkStageInfo
 
numCompletedTasks() - 类 中的方法org.apache.spark.SparkStageInfoImpl
 
numCompletedTasks() - 类 中的方法org.apache.spark.status.api.v1.JobData
 
numCompleteTasks() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
numDocs() - 类 中的方法org.apache.spark.ml.feature.IDFModel
Returns the number of documents evaluated to compute the IDF.
numDocs() - 类 中的方法org.apache.spark.mllib.feature.IDFModel
 
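The document count feeds the inverse-document-frequency weight. A common smoothed form is log((numDocs + 1) / (docFreq + 1)); the sketch below (hypothetical `Idf` class) shows that convention and should be read as an assumption about the formula rather than a guarantee of Spark's exact implementation:

```java
public class Idf {
    // Smoothed inverse document frequency: rarer terms (smaller docFreq)
    // get a larger weight; the +1 terms avoid division by zero and log(0).
    public static double idf(long numDocs, long docFreq) {
        return Math.log((numDocs + 1.0) / (docFreq + 1.0));
    }

    public static void main(String[] args) {
        System.out.println(idf(3, 1)); // log(2)
    }
}
```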
numEdges() - 类 中的方法org.apache.spark.graphx.GraphOps
 
numElements() - 类 中的方法org.apache.spark.sql.vectorized.ColumnarArray
 
numElements() - 类 中的方法org.apache.spark.sql.vectorized.ColumnarMap
 
Numeric() - 类 中的静态方法org.apache.spark.ml.attribute.AttributeType
Numeric type.
NumericAttribute - org.apache.spark.ml.attribute中的类
:: DeveloperApi :: A numeric attribute with optional summary statistics.
NumericParser - org.apache.spark.mllib.util中的类
Simple parser for a numeric structure consisting of three types: number (a double in Java's floating-point format), array (numbers stored as [v0,v1,...]), and tuple (values stored as (v0,v1,...)).
NumericParser() - 类 的构造器org.apache.spark.mllib.util.NumericParser
 
numericRDDToDoubleRDDFunctions(RDD<T>, Numeric<T>) - 类 中的静态方法org.apache.spark.rdd.RDD
 
NumericType - org.apache.spark.sql.types中的类
Numeric data types.
NumericType() - 类 的构造器org.apache.spark.sql.types.NumericType
 
numFailedOutputOps() - 类 中的方法org.apache.spark.status.api.v1.streaming.BatchInfo
 
numFailedStages() - 类 中的方法org.apache.spark.status.api.v1.JobData
 
numFailedTasks() - 接口 中的方法org.apache.spark.SparkStageInfo
 
numFailedTasks() - 类 中的方法org.apache.spark.SparkStageInfoImpl
 
numFailedTasks() - 类 中的方法org.apache.spark.status.api.v1.JobData
 
numFailedTasks() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
numFeatures() - 类 中的方法org.apache.spark.ml.classification.DecisionTreeClassificationModel
 
numFeatures() - 类 中的方法org.apache.spark.ml.classification.GBTClassificationModel
 
numFeatures() - 类 中的方法org.apache.spark.ml.classification.LinearSVCModel
 
numFeatures() - 类 中的方法org.apache.spark.ml.classification.LogisticRegressionModel
 
numFeatures() - 类 中的方法org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
 
numFeatures() - 类 中的方法org.apache.spark.ml.classification.NaiveBayesModel
 
numFeatures() - 类 中的方法org.apache.spark.ml.classification.OneVsRestModel
 
numFeatures() - 类 中的方法org.apache.spark.ml.classification.RandomForestClassificationModel
 
numFeatures() - 类 中的方法org.apache.spark.ml.feature.FeatureHasher
 
numFeatures() - 类 中的方法org.apache.spark.ml.feature.HashingTF
 
numFeatures() - 类 中的方法org.apache.spark.ml.feature.VectorIndexerModel
 
numFeatures() - 接口 中的方法org.apache.spark.ml.param.shared.HasNumFeatures
Param for Number of features.
numFeatures() - 类 中的方法org.apache.spark.ml.PredictionModel
Returns the number of features the model was trained on.
numFeatures() - 类 中的方法org.apache.spark.ml.regression.DecisionTreeRegressionModel
 
numFeatures() - 类 中的方法org.apache.spark.ml.regression.GBTRegressionModel
 
numFeatures() - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
 
numFeatures() - 类 中的方法org.apache.spark.ml.regression.LinearRegressionModel
 
numFeatures() - 类 中的方法org.apache.spark.ml.regression.RandomForestRegressionModel
 
numFeatures() - 类 中的方法org.apache.spark.mllib.classification.LogisticRegressionModel
 
numFeatures() - 类 中的方法org.apache.spark.mllib.feature.HashingTF
 
numFields() - 类 中的方法org.apache.spark.sql.vectorized.ColumnarRow
 
numFolds() - 类 中的方法org.apache.spark.ml.tuning.CrossValidator
 
numFolds() - 类 中的方法org.apache.spark.ml.tuning.CrossValidatorModel
 
numFolds() - 接口 中的方法org.apache.spark.ml.tuning.CrossValidatorParams
Param for number of folds for cross validation.
numHashTables() - 接口 中的方法org.apache.spark.ml.feature.LSHParams
Param for the number of hash tables used in LSH OR-amplification.
numInactiveReceivers() - 类 中的方法org.apache.spark.status.api.v1.streaming.StreamingStatistics
 
numInputRows() - 类 中的方法org.apache.spark.sql.streaming.SourceProgress
 
numInputRows() - 类 中的方法org.apache.spark.sql.streaming.StreamingQueryProgress
The aggregate (across all sources) number of records processed in a trigger.
numInstances() - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
 
numInstances() - 类 中的方法org.apache.spark.ml.regression.LinearRegressionSummary
 
numItemBlocks() - 类 中的方法org.apache.spark.ml.recommendation.ALS
 
numItemBlocks() - 接口 中的方法org.apache.spark.ml.recommendation.ALSParams
Param for number of item blocks (positive).
numIter() - 类 中的方法org.apache.spark.ml.clustering.ClusteringSummary
 
numIterations() - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegressionTrainingSummary
 
numIterations() - 类 中的方法org.apache.spark.mllib.tree.configuration.BoostingStrategy
 
numKilledTasks() - 类 中的方法org.apache.spark.status.api.v1.JobData
 
numKilledTasks() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
numNodes() - 接口 中的方法org.apache.spark.ml.tree.DecisionTreeModel
Number of nodes in tree, including leaf nodes.
numNodes() - 类 中的方法org.apache.spark.mllib.tree.model.DecisionTreeModel
Get number of nodes in tree, including leaf nodes.
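Counting every node in a binary tree, leaves included, is a one-line recursion. A toy illustration (hypothetical `TreeCount` class with its own minimal `Node`, unrelated to Spark's tree types):

```java
public class TreeCount {
    static class Node {
        Node left, right;
        Node(Node l, Node r) { left = l; right = r; }
    }

    // Count all nodes reachable from n, including leaf nodes.
    public static int numNodes(Node n) {
        if (n == null) return 0;
        return 1 + numNodes(n.left) + numNodes(n.right);
    }

    public static void main(String[] args) {
        Node root = new Node(new Node(null, null), new Node(null, null));
        System.out.println(numNodes(root)); // 3
    }
}
```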
numNonzeros() - 类 中的方法org.apache.spark.ml.linalg.DenseMatrix
 
numNonzeros() - 类 中的方法org.apache.spark.ml.linalg.DenseVector
 
numNonzeros() - 接口 中的方法org.apache.spark.ml.linalg.Matrix
Find the number of non-zero active values.
numNonzeros() - 类 中的方法org.apache.spark.ml.linalg.SparseMatrix
 
numNonzeros() - 类 中的方法org.apache.spark.ml.linalg.SparseVector
 
numNonzeros() - 接口 中的方法org.apache.spark.ml.linalg.Vector
Number of nonzero elements.
numNonZeros(Column, Column) - 类 中的静态方法org.apache.spark.ml.stat.Summarizer
 
numNonZeros(Column) - 类 中的静态方法org.apache.spark.ml.stat.Summarizer
 
numNonzeros() - 类 中的方法org.apache.spark.mllib.linalg.DenseMatrix
 
numNonzeros() - 类 中的方法org.apache.spark.mllib.linalg.DenseVector
 
numNonzeros() - 接口 中的方法org.apache.spark.mllib.linalg.Matrix
Find the number of non-zero active values.
numNonzeros() - 类 中的方法org.apache.spark.mllib.linalg.SparseMatrix
 
numNonzeros() - 类 中的方法org.apache.spark.mllib.linalg.SparseVector
 
numNonzeros() - 接口 中的方法org.apache.spark.mllib.linalg.Vector
Number of nonzero elements.
numNonzeros() - 类 中的方法org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
Number of nonzero elements in each dimension.
numNonzeros() - 接口 中的方法org.apache.spark.mllib.stat.MultivariateStatisticalSummary
Number of nonzero elements (including explicitly presented zero values) in each column.
numNulls() - 类 中的方法org.apache.spark.sql.vectorized.ArrowColumnVector
 
numNulls() - 类 中的方法org.apache.spark.sql.vectorized.ColumnVector
Returns the number of nulls in this column vector.
numOfPoints() - 类 中的方法org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats
 
numOutputRows() - 类 中的方法org.apache.spark.sql.streaming.SinkProgress
 
numPartitions() - 类 中的方法org.apache.spark.HashPartitioner
 
numPartitions() - 类 中的方法org.apache.spark.ml.feature.Word2Vec
 
numPartitions() - 接口 中的方法org.apache.spark.ml.feature.Word2VecBase
Number of partitions for sentences of words.
numPartitions() - 类 中的方法org.apache.spark.ml.feature.Word2VecModel
 
numPartitions() - 类 中的方法org.apache.spark.ml.fpm.FPGrowth
 
numPartitions() - 类 中的方法org.apache.spark.ml.fpm.FPGrowthModel
 
numPartitions() - 接口 中的方法org.apache.spark.ml.fpm.FPGrowthParams
Number of partitions (at least 1) used by parallel FP-growth.
numPartitions() - 类 中的方法org.apache.spark.Partitioner
 
numPartitions() - 类 中的方法org.apache.spark.RangePartitioner
 
numPartitions() - 类 中的方法org.apache.spark.rdd.PartitionGroup
 
numPartitions() - 接口 中的方法org.apache.spark.sql.connector.read.partitioning.Partitioning
Returns the number of partitions (i.e., InputPartitions) the data source outputs.
numPartitions() - 类 中的方法org.apache.spark.status.api.v1.RDDStorageInfo
 
numPartitions() - 类 中的方法org.apache.spark.storage.RDDInfo
 
numPartitions(int) - 类 中的方法org.apache.spark.streaming.StateSpec
Set the number of partitions by which the state RDDs generated by mapWithState will be partitioned.
numProcessedRecords() - 类 中的方法org.apache.spark.status.api.v1.streaming.StreamingStatistics
 
numReceivedRecords() - 类 中的方法org.apache.spark.status.api.v1.streaming.StreamingStatistics
 
numReceivers() - 类 中的方法org.apache.spark.status.api.v1.streaming.StreamingStatistics
 
numRecords() - 接口 中的方法org.apache.spark.streaming.receiver.ReceivedBlockStoreResult
 
numRecords() - 类 中的方法org.apache.spark.streaming.scheduler.BatchInfo
The number of records received by the receivers in this batch.
numRecords() - 类 中的方法org.apache.spark.streaming.scheduler.StreamInputInfo
 
numRetainedCompletedBatches() - 类 中的方法org.apache.spark.status.api.v1.streaming.StreamingStatistics
 
numRetries(SparkConf) - 类 中的静态方法org.apache.spark.util.RpcUtils
Returns the configured number of times to retry connecting
numRowBlocks() - 类 中的方法org.apache.spark.mllib.linalg.distributed.BlockMatrix
 
numRows() - 类 中的方法org.apache.spark.ml.linalg.DenseMatrix
 
numRows() - 接口 中的方法org.apache.spark.ml.linalg.Matrix
Number of rows.
numRows() - 类 中的方法org.apache.spark.ml.linalg.SparseMatrix
 
numRows() - 类 中的方法org.apache.spark.mllib.linalg.DenseMatrix
 
numRows() - 类 中的方法org.apache.spark.mllib.linalg.distributed.BlockMatrix
 
numRows() - 类 中的方法org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
Gets or computes the number of rows.
numRows() - 接口 中的方法org.apache.spark.mllib.linalg.distributed.DistributedMatrix
Gets or computes the number of rows.
numRows() - 类 中的方法org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
 
numRows() - 类 中的方法org.apache.spark.mllib.linalg.distributed.RowMatrix
Gets or computes the number of rows.
numRows() - 接口 中的方法org.apache.spark.mllib.linalg.Matrix
Number of rows.
numRows() - 类 中的方法org.apache.spark.mllib.linalg.SparseMatrix
 
numRows() - 接口 中的方法org.apache.spark.sql.connector.read.Statistics
 
numRows() - 类 中的方法org.apache.spark.sql.vectorized.ColumnarBatch
Returns the number of rows for read, including filtered rows.
numRowsTotal() - 类 中的方法org.apache.spark.sql.streaming.StateOperatorProgress
 
numRowsUpdated() - 类 中的方法org.apache.spark.sql.streaming.StateOperatorProgress
 
numRunningTasks() - 接口 中的方法org.apache.spark.SparkExecutorInfo
 
numRunningTasks() - 类 中的方法org.apache.spark.SparkExecutorInfoImpl
 
numSkippedStages() - 类 中的方法org.apache.spark.status.api.v1.JobData
 
numSkippedTasks() - 类 中的方法org.apache.spark.status.api.v1.JobData
 
numSpilledStages() - 类 中的方法org.apache.spark.SpillListener
 
numStreamBlocks() - 类 中的方法org.apache.spark.ui.storage.ExecutorStreamSummary
 
numTasks() - 类 中的方法org.apache.spark.scheduler.StageInfo
 
numTasks() - 接口 中的方法org.apache.spark.SparkStageInfo
 
numTasks() - 类 中的方法org.apache.spark.SparkStageInfoImpl
 
numTasks() - 类 中的方法org.apache.spark.status.api.v1.JobData
 
numTasks() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
numTopFeatures() - 类 中的方法org.apache.spark.ml.feature.ChiSqSelector
 
numTopFeatures() - 类 中的方法org.apache.spark.ml.feature.ChiSqSelectorModel
 
numTopFeatures() - 接口 中的方法org.apache.spark.ml.feature.ChiSqSelectorParams
Number of features that selector will select, ordered by ascending p-value.
numTopFeatures() - 类 中的方法org.apache.spark.mllib.feature.ChiSqSelector
 
numTotalCompletedBatches() - 类 中的方法org.apache.spark.status.api.v1.streaming.StreamingStatistics
 
numTotalOutputOps() - 类 中的方法org.apache.spark.status.api.v1.streaming.BatchInfo
 
numTrees() - 类 中的方法org.apache.spark.ml.classification.GBTClassificationModel
Number of trees in ensemble
numTrees() - 类 中的方法org.apache.spark.ml.classification.RandomForestClassificationModel
 
numTrees() - 类 中的方法org.apache.spark.ml.classification.RandomForestClassifier
 
numTrees() - 类 中的方法org.apache.spark.ml.regression.GBTRegressionModel
Number of trees in ensemble
numTrees() - 类 中的方法org.apache.spark.ml.regression.RandomForestRegressionModel
 
numTrees() - 类 中的方法org.apache.spark.ml.regression.RandomForestRegressor
 
numTrees() - 接口 中的方法org.apache.spark.ml.tree.RandomForestParams
Number of trees to train (at least 1).
numUserBlocks() - 类 中的方法org.apache.spark.ml.recommendation.ALS
 
numUserBlocks() - 接口 中的方法org.apache.spark.ml.recommendation.ALSParams
Param for number of user blocks (positive).
numValues() - 类 中的方法org.apache.spark.ml.attribute.NominalAttribute
 
numVertices() - 类 中的方法org.apache.spark.graphx.GraphOps
 

O

obj() - 类 中的方法org.apache.spark.internal.io.FileCommitProtocol.TaskCommitMessage
 
objectFile(String, int) - 类 中的方法org.apache.spark.api.java.JavaSparkContext
Load an RDD saved as a SequenceFile containing serialized objects, with NullWritable keys and BytesWritable values that contain a serialized partition.
objectFile(String) - 类 中的方法org.apache.spark.api.java.JavaSparkContext
Load an RDD saved as a SequenceFile containing serialized objects, with NullWritable keys and BytesWritable values that contain a serialized partition.
objectFile(String, int, ClassTag<T>) - 类 中的方法org.apache.spark.SparkContext
Load an RDD saved as a SequenceFile containing serialized objects, with NullWritable keys and BytesWritable values that contain a serialized partition.
objectiveHistory() - 类 中的方法org.apache.spark.ml.classification.BinaryLogisticRegressionTrainingSummaryImpl
 
objectiveHistory() - 接口 中的方法org.apache.spark.ml.classification.LogisticRegressionTrainingSummary
objective function (scaled loss + regularization) at each iteration.
objectiveHistory() - 类 中的方法org.apache.spark.ml.classification.LogisticRegressionTrainingSummaryImpl
 
objectiveHistory() - 类 中的方法org.apache.spark.ml.regression.LinearRegressionTrainingSummary
 
ObjectStreamClassMethods(ObjectStreamClass) - 类 的构造器org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods
 
ObjectStreamClassMethods$() - 类 的构造器org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods$
 
ObjectType - org.apache.spark.sql.types中的类
 
ObjectType(Class<?>) - 类 的构造器org.apache.spark.sql.types.ObjectType
 
obtainDelegationTokens(Configuration, SparkConf, Credentials) - 接口 中的方法org.apache.spark.security.HadoopDelegationTokenProvider
Obtain delegation tokens for this service and get the time of the next renewal.
ocvTypes() - 类 中的静态方法org.apache.spark.ml.image.ImageSchema
(Scala-specific) OpenCV type mapping supported
of(T) - 类 中的静态方法org.apache.spark.api.java.Optional
 
of(RDD<Tuple2<Object, Object>>) - 类 中的静态方法org.apache.spark.mllib.evaluation.AreaUnderCurve
Returns the area under the given curve.
of(Iterable<Tuple2<Object, Object>>) - 类 中的静态方法org.apache.spark.mllib.evaluation.AreaUnderCurve
Returns the area under the given curve.
of(JavaRDD<Tuple2<T, T>>) - 类 中的静态方法org.apache.spark.mllib.evaluation.RankingMetrics
Creates a RankingMetrics instance (for Java users).
of(String[], String) - 接口 中的静态方法org.apache.spark.sql.connector.catalog.Identifier
 
OFF_HEAP - 类 中的静态变量org.apache.spark.api.java.StorageLevels
 
OFF_HEAP() - 类 中的静态方法org.apache.spark.storage.StorageLevel
 
OffHeapExecutionMemory - org.apache.spark.metrics中的类
 
OffHeapExecutionMemory() - 类 的构造器org.apache.spark.metrics.OffHeapExecutionMemory
 
offHeapMemoryRemaining() - 类 中的方法org.apache.spark.status.api.v1.RDDDataDistribution
 
offHeapMemoryUsed() - 类 中的方法org.apache.spark.status.api.v1.RDDDataDistribution
 
OffHeapStorageMemory - org.apache.spark.metrics中的类
 
OffHeapStorageMemory() - 类 的构造器org.apache.spark.metrics.OffHeapStorageMemory
 
OffHeapUnifiedMemory - org.apache.spark.metrics中的类
 
OffHeapUnifiedMemory() - 类 的构造器org.apache.spark.metrics.OffHeapUnifiedMemory
 
offHeapUsed() - 类 中的方法org.apache.spark.status.LiveRDDDistribution
 
Offset - org.apache.spark.sql.connector.read.streaming中的类
An abstract representation of progress through a MicroBatchStream or ContinuousStream.
Offset() - 类 的构造器org.apache.spark.sql.connector.read.streaming.Offset
 
offsetBytes(String, long, long, long) - 类 中的静态方法org.apache.spark.util.Utils
Return a string containing part of a file from byte 'start' to 'end'.
offsetBytes(Seq<File>, Seq<Object>, long, long) - 类 中的静态方法org.apache.spark.util.Utils
Return a string containing data across a set of files.
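Extracting a byte range from a file and decoding it as a string can be sketched with a `RandomAccessFile` (hypothetical `ByteRange` class; this is an illustration of the operation, not Spark's `Utils.offsetBytes` implementation, which also handles multi-file spans):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;

public class ByteRange {
    // Read bytes [start, end) of a file, clamped to the file's length,
    // and decode them as UTF-8.
    public static String slice(Path file, long start, long end) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            long len = raf.length();
            long from = Math.max(0, start);
            long to = Math.min(len, end);
            byte[] buf = new byte[(int) Math.max(0, to - from)];
            raf.seek(from);
            raf.readFully(buf);
            return new String(buf, StandardCharsets.UTF_8);
        }
    }
}
```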
offsetCol() - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression
 
offsetCol() - 接口 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
Param for offset column name.
offsetCol() - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
 
ofNullable(T) - 类 中的静态方法org.apache.spark.api.java.Optional
 
ofRows(SparkSession, LogicalPlan) - 类 中的静态方法org.apache.spark.sql.Dataset
 
ofRows(SparkSession, LogicalPlan, QueryPlanningTracker) - 类 中的静态方法org.apache.spark.sql.Dataset
A variant of ofRows that allows passing in a tracker so we can track query parsing time.
oldVersionExternalTempPath(Path, Configuration, String) - 接口 中的方法org.apache.spark.sql.hive.execution.SaveAsHiveFile
 
on(Function1<U, T>) - 类 中的静态方法org.apache.spark.sql.types.ByteExactNumeric
 
on(Function1<U, T>) - 类 中的静态方法org.apache.spark.sql.types.DecimalExactNumeric
 
on(Function1<U, T>) - 类 中的静态方法org.apache.spark.sql.types.DoubleExactNumeric
 
on(Function1<U, T>) - 类 中的静态方法org.apache.spark.sql.types.FloatExactNumeric
 
on(Function1<U, T>) - 类 中的静态方法org.apache.spark.sql.types.IntegerExactNumeric
 
on(Function1<U, T>) - 类 中的静态方法org.apache.spark.sql.types.LongExactNumeric
 
on(Function1<U, T>) - 类 中的静态方法org.apache.spark.sql.types.ShortExactNumeric
 
onAddData(Object, Object) - 接口 中的方法org.apache.spark.streaming.receiver.BlockGeneratorListener
Called after a data item is added into the BlockGenerator.
onApplicationEnd(SparkListenerApplicationEnd) - 类 中的方法org.apache.spark.scheduler.SparkListener
 
onApplicationEnd(SparkListenerApplicationEnd) - 接口 中的方法org.apache.spark.scheduler.SparkListenerInterface
Called when the application ends
onApplicationEnd(SparkListenerApplicationEnd) - 类 中的方法org.apache.spark.SparkFirehoseListener
 
onApplicationStart(SparkListenerApplicationStart) - 类 中的方法org.apache.spark.scheduler.SparkListener
 
onApplicationStart(SparkListenerApplicationStart) - 接口 中的方法org.apache.spark.scheduler.SparkListenerInterface
Called when the application starts
onApplicationStart(SparkListenerApplicationStart) - 类 中的方法org.apache.spark.SparkFirehoseListener
 
onBatchCompleted(JavaStreamingListenerBatchCompleted) - 接口 中的方法org.apache.spark.streaming.api.java.PythonStreamingListener
Called when processing of a batch of jobs has completed.
onBatchCompleted(StreamingListenerBatchCompleted) - 类 中的方法org.apache.spark.streaming.scheduler.StatsReportListener
 
onBatchCompleted(StreamingListenerBatchCompleted) - 接口 中的方法org.apache.spark.streaming.scheduler.StreamingListener
Called when processing of a batch of jobs has completed.
onBatchStarted(JavaStreamingListenerBatchStarted) - 接口 中的方法org.apache.spark.streaming.api.java.PythonStreamingListener
Called when processing of a batch of jobs has started.
onBatchStarted(StreamingListenerBatchStarted) - 接口 中的方法org.apache.spark.streaming.scheduler.StreamingListener
Called when processing of a batch of jobs has started.
onBatchSubmitted(JavaStreamingListenerBatchSubmitted) - 接口 中的方法org.apache.spark.streaming.api.java.PythonStreamingListener
Called when a batch of jobs has been submitted for processing.
onBatchSubmitted(StreamingListenerBatchSubmitted) - 接口 中的方法org.apache.spark.streaming.scheduler.StreamingListener
Called when a batch of jobs has been submitted for processing.
onBlockManagerAdded(SparkListenerBlockManagerAdded) - 类 中的方法org.apache.spark.scheduler.SparkListener
 
onBlockManagerAdded(SparkListenerBlockManagerAdded) - 接口 中的方法org.apache.spark.scheduler.SparkListenerInterface
Called when a new block manager has joined
onBlockManagerAdded(SparkListenerBlockManagerAdded) - 类 中的方法org.apache.spark.SparkFirehoseListener
 
onBlockManagerRemoved(SparkListenerBlockManagerRemoved) - 类 中的方法org.apache.spark.scheduler.SparkListener
 
onBlockManagerRemoved(SparkListenerBlockManagerRemoved) - 接口 中的方法org.apache.spark.scheduler.SparkListenerInterface
Called when an existing block manager has been removed
onBlockManagerRemoved(SparkListenerBlockManagerRemoved) - 类 中的方法org.apache.spark.SparkFirehoseListener
 
onBlockUpdated(SparkListenerBlockUpdated) - 类 中的方法org.apache.spark.scheduler.SparkListener
 
onBlockUpdated(SparkListenerBlockUpdated) - 接口 中的方法org.apache.spark.scheduler.SparkListenerInterface
Called when the driver receives a block update info.
onBlockUpdated(SparkListenerBlockUpdated) - 类 中的方法org.apache.spark.SparkFirehoseListener
 
Once() - 类 中的静态方法org.apache.spark.sql.streaming.Trigger
A trigger that processes only one batch of data in a streaming query and then terminates the query.
OnceParser(Function1<Reader<Object>, Parsers.ParseResult<T>>) - 类 中的静态方法org.apache.spark.ml.feature.RFormulaParser
 
onComplete(Function1<Try<T>, U>, ExecutionContext) - 类 中的方法org.apache.spark.ComplexFutureAction
 
onComplete(Function1<Try<T>, U>, ExecutionContext) - 接口 中的方法org.apache.spark.FutureAction
When this action is completed, either through an exception, or a value, applies the provided function.
onComplete(Function1<R, BoxedUnit>) - 类 中的方法org.apache.spark.partial.PartialResult
Set a handler to be called when this PartialResult completes.
onComplete(Function1<Try<T>, U>, ExecutionContext) - 类 中的方法org.apache.spark.SimpleFutureAction
 
onComplete(TaskContext) - 类 中的方法org.apache.spark.storage.ShuffleFetchCompletionListener
 
onConnected(RpcAddress) - 接口 中的方法org.apache.spark.rpc.RpcEndpoint
Invoked when remoteAddress is connected to the current node.
onDataWriterCommit(WriterCommitMessage) - 接口 中的方法org.apache.spark.sql.connector.write.BatchWrite
Handles a commit message on receiving from a successful data writer.
onDisconnected(RpcAddress) - 接口 中的方法org.apache.spark.rpc.RpcEndpoint
Invoked when remoteAddress is lost.
one() - 类 中的静态方法org.apache.spark.sql.types.ByteExactNumeric
 
one() - 类 中的静态方法org.apache.spark.sql.types.DecimalExactNumeric
 
one() - 类 中的静态方法org.apache.spark.sql.types.DoubleExactNumeric
 
one() - 类 中的静态方法org.apache.spark.sql.types.FloatExactNumeric
 
one() - 类 中的静态方法org.apache.spark.sql.types.IntegerExactNumeric
 
one() - 类 中的静态方法org.apache.spark.sql.types.LongExactNumeric
 
one() - 类 中的静态方法org.apache.spark.sql.types.ShortExactNumeric
 
OneHotEncoder - org.apache.spark.ml.feature中的类
A one-hot encoder that maps a column of category indices to a column of binary vectors, with at most a single one-value per row that indicates the input category index.
OneHotEncoder(String) - 类 的构造器org.apache.spark.ml.feature.OneHotEncoder
 
OneHotEncoder() - 类 的构造器org.apache.spark.ml.feature.OneHotEncoder
 
OneHotEncoderBase - org.apache.spark.ml.feature中的接口
Private trait for params and common methods for OneHotEncoder and OneHotEncoderModel
OneHotEncoderCommon - org.apache.spark.ml.feature中的类
Provides some helper methods used by OneHotEncoder.
OneHotEncoderCommon() - 类 的构造器org.apache.spark.ml.feature.OneHotEncoderCommon
 
OneHotEncoderModel - org.apache.spark.ml.feature中的类
param: categorySizes Original number of categories for each feature being encoded.
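At its core, one-hot encoding maps a category index to a binary indicator vector. A minimal sketch (hypothetical `OneHot` class; note that Spark's `OneHotEncoder` by default drops the last category, so its real output vectors are one element shorter than the category count):

```java
public class OneHot {
    // Map a 0-based category index to a binary indicator vector of length size.
    public static double[] encode(int index, int size) {
        double[] v = new double[size];
        v[index] = 1.0;
        return v;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(encode(2, 4))); // [0.0, 0.0, 1.0, 0.0]
    }
}
```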
onEnvironmentUpdate(SparkListenerEnvironmentUpdate) - 类 中的方法org.apache.spark.scheduler.SparkListener
 
onEnvironmentUpdate(SparkListenerEnvironmentUpdate) - 接口 中的方法org.apache.spark.scheduler.SparkListenerInterface
Called when environment properties have been updated
onEnvironmentUpdate(SparkListenerEnvironmentUpdate) - 类 中的方法org.apache.spark.SparkFirehoseListener
 
onError(Throwable) - 接口 中的方法org.apache.spark.rpc.RpcEndpoint
Invoked when any exception is thrown during handling messages.
onError(String, Throwable) - 接口 中的方法org.apache.spark.streaming.receiver.BlockGeneratorListener
Called when an error has occurred in the BlockGenerator.
ones(int, int) - 类 中的静态方法org.apache.spark.ml.linalg.DenseMatrix
Generate a DenseMatrix consisting of ones.
ones(int, int) - 类 中的静态方法org.apache.spark.ml.linalg.Matrices
Generate a DenseMatrix consisting of ones.
ones(int, int) - 类 中的静态方法org.apache.spark.mllib.linalg.DenseMatrix
Generate a DenseMatrix consisting of ones.
ones(int, int) - 类 中的静态方法org.apache.spark.mllib.linalg.Matrices
Generate a DenseMatrix consisting of ones.
OneSampleTwoSided() - 类 中的方法org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest.NullHypothesis$
 
OneToOneDependency<T> - org.apache.spark中的类
:: DeveloperApi :: Represents a one-to-one dependency between partitions of the parent and child RDDs.
OneToOneDependency(RDD<T>) - 类 的构造器org.apache.spark.OneToOneDependency
 
onEvent(SparkListenerEvent) - 类 中的方法org.apache.spark.SparkFirehoseListener
 
OneVsRest - org.apache.spark.ml.classification中的类
Reduction of Multiclass Classification to Binary Classification.
OneVsRest(String) - 类 的构造器org.apache.spark.ml.classification.OneVsRest
 
OneVsRest() - 类 的构造器org.apache.spark.ml.classification.OneVsRest
 
OneVsRestModel - org.apache.spark.ml.classification中的类
Model produced by OneVsRest.
OneVsRestParams - org.apache.spark.ml.classification中的接口
Params for OneVsRest.
onExecutorAdded(SparkListenerExecutorAdded) - Method in class org.apache.spark.scheduler.SparkListener
 
onExecutorAdded(SparkListenerExecutorAdded) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called when the driver registers a new executor.
onExecutorAdded(SparkListenerExecutorAdded) - Method in class org.apache.spark.SparkFirehoseListener
 
onExecutorBlacklisted(SparkListenerExecutorBlacklisted) - Method in class org.apache.spark.scheduler.SparkListener
 
onExecutorBlacklisted(SparkListenerExecutorBlacklisted) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called when the driver blacklists an executor for a Spark application.
onExecutorBlacklisted(SparkListenerExecutorBlacklisted) - Method in class org.apache.spark.SparkFirehoseListener
 
onExecutorBlacklistedForStage(SparkListenerExecutorBlacklistedForStage) - Method in class org.apache.spark.scheduler.SparkListener
 
onExecutorBlacklistedForStage(SparkListenerExecutorBlacklistedForStage) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called when the driver blacklists an executor for a stage.
onExecutorBlacklistedForStage(SparkListenerExecutorBlacklistedForStage) - Method in class org.apache.spark.SparkFirehoseListener
 
onExecutorMetricsUpdate(SparkListenerExecutorMetricsUpdate) - Method in class org.apache.spark.scheduler.SparkListener
 
onExecutorMetricsUpdate(SparkListenerExecutorMetricsUpdate) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called when the driver receives task metrics from an executor in a heartbeat.
onExecutorMetricsUpdate(SparkListenerExecutorMetricsUpdate) - Method in class org.apache.spark.SparkFirehoseListener
 
onExecutorRemoved(SparkListenerExecutorRemoved) - Method in class org.apache.spark.scheduler.SparkListener
 
onExecutorRemoved(SparkListenerExecutorRemoved) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called when the driver removes an executor.
onExecutorRemoved(SparkListenerExecutorRemoved) - Method in class org.apache.spark.SparkFirehoseListener
 
onExecutorUnblacklisted(SparkListenerExecutorUnblacklisted) - Method in class org.apache.spark.scheduler.SparkListener
 
onExecutorUnblacklisted(SparkListenerExecutorUnblacklisted) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called when the driver re-enables a previously blacklisted executor.
onExecutorUnblacklisted(SparkListenerExecutorUnblacklisted) - Method in class org.apache.spark.SparkFirehoseListener
 
onFail(Function1<Exception, BoxedUnit>) - Method in class org.apache.spark.partial.PartialResult
Set a handler to be called if this PartialResult's job fails.
onFailure(Throwable) - Method in interface org.apache.spark.rpc.netty.OutboxMessage
 
onFailure(String, QueryExecution, Throwable) - Method in interface org.apache.spark.sql.util.QueryExecutionListener
A callback function that will be called when a query execution failed.
onGenerateBlock(StreamBlockId) - Method in interface org.apache.spark.streaming.receiver.BlockGeneratorListener
Called when a new block of data is generated by the block generator.
OnHeapExecutionMemory - Class in org.apache.spark.metrics
 
OnHeapExecutionMemory() - Constructor for class org.apache.spark.metrics.OnHeapExecutionMemory
 
onHeapMemoryRemaining() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
 
onHeapMemoryUsed() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
 
OnHeapStorageMemory - Class in org.apache.spark.metrics
 
OnHeapStorageMemory() - Constructor for class org.apache.spark.metrics.OnHeapStorageMemory
 
OnHeapUnifiedMemory - Class in org.apache.spark.metrics
 
OnHeapUnifiedMemory() - Constructor for class org.apache.spark.metrics.OnHeapUnifiedMemory
 
onHeapUsed() - Method in class org.apache.spark.status.LiveRDDDistribution
 
onJobEnd(SparkListenerJobEnd) - Method in class org.apache.spark.scheduler.SparkListener
 
onJobEnd(SparkListenerJobEnd) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called when a job ends
onJobEnd(SparkListenerJobEnd) - Method in class org.apache.spark.SparkFirehoseListener
 
onJobStart(SparkListenerJobStart) - Method in class org.apache.spark.scheduler.SparkListener
 
onJobStart(SparkListenerJobStart) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called when a job starts
onJobStart(SparkListenerJobStart) - Method in class org.apache.spark.SparkFirehoseListener
 
OnlineLDAOptimizer - Class in org.apache.spark.mllib.clustering
:: DeveloperApi :: An online optimizer for LDA.
OnlineLDAOptimizer() - Constructor for class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
 
onNetworkError(Throwable, RpcAddress) - Method in interface org.apache.spark.rpc.RpcEndpoint
Invoked when some network error happens in the connection between the current node and remoteAddress.
onNodeBlacklisted(SparkListenerNodeBlacklisted) - Method in class org.apache.spark.scheduler.SparkListener
 
onNodeBlacklisted(SparkListenerNodeBlacklisted) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called when the driver blacklists a node for a Spark application.
onNodeBlacklisted(SparkListenerNodeBlacklisted) - Method in class org.apache.spark.SparkFirehoseListener
 
onNodeBlacklistedForStage(SparkListenerNodeBlacklistedForStage) - Method in class org.apache.spark.scheduler.SparkListener
 
onNodeBlacklistedForStage(SparkListenerNodeBlacklistedForStage) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called when the driver blacklists a node for a stage.
onNodeBlacklistedForStage(SparkListenerNodeBlacklistedForStage) - Method in class org.apache.spark.SparkFirehoseListener
 
onNodeUnblacklisted(SparkListenerNodeUnblacklisted) - Method in class org.apache.spark.scheduler.SparkListener
 
onNodeUnblacklisted(SparkListenerNodeUnblacklisted) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called when the driver re-enables a previously blacklisted node.
onNodeUnblacklisted(SparkListenerNodeUnblacklisted) - Method in class org.apache.spark.SparkFirehoseListener
 
onOtherEvent(SparkListenerEvent) - Method in class org.apache.spark.scheduler.SparkListener
 
onOtherEvent(SparkListenerEvent) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called when other events like SQL-specific events are posted.
onOtherEvent(SparkListenerEvent) - Method in class org.apache.spark.SparkFirehoseListener
 
onOutputOperationCompleted(JavaStreamingListenerOutputOperationCompleted) - Method in interface org.apache.spark.streaming.api.java.PythonStreamingListener
Called when processing of a job of a batch has completed.
onOutputOperationCompleted(StreamingListenerOutputOperationCompleted) - Method in interface org.apache.spark.streaming.scheduler.StreamingListener
Called when processing of a job of a batch has completed.
onOutputOperationStarted(JavaStreamingListenerOutputOperationStarted) - Method in interface org.apache.spark.streaming.api.java.PythonStreamingListener
Called when processing of a job of a batch has started.
onOutputOperationStarted(StreamingListenerOutputOperationStarted) - Method in interface org.apache.spark.streaming.scheduler.StreamingListener
Called when processing of a job of a batch has started.
onPushBlock(StreamBlockId, ArrayBuffer<?>) - Method in interface org.apache.spark.streaming.receiver.BlockGeneratorListener
Called when a new block is ready to be pushed.
onQueryProgress(StreamingQueryListener.QueryProgressEvent) - Method in class org.apache.spark.sql.streaming.StreamingQueryListener
Called when there is some status update (ingestion rate updated, etc.)
onQueryStarted(StreamingQueryListener.QueryStartedEvent) - Method in class org.apache.spark.sql.streaming.StreamingQueryListener
Called when a query is started.
onQueryTerminated(StreamingQueryListener.QueryTerminatedEvent) - Method in class org.apache.spark.sql.streaming.StreamingQueryListener
Called when a query is stopped, with or without error.
onReceiverError(JavaStreamingListenerReceiverError) - Method in interface org.apache.spark.streaming.api.java.PythonStreamingListener
Called when a receiver has reported an error
onReceiverError(StreamingListenerReceiverError) - Method in interface org.apache.spark.streaming.scheduler.StreamingListener
Called when a receiver has reported an error
onReceiverStarted(JavaStreamingListenerReceiverStarted) - Method in interface org.apache.spark.streaming.api.java.PythonStreamingListener
Called when a receiver has been started
onReceiverStarted(StreamingListenerReceiverStarted) - Method in interface org.apache.spark.streaming.scheduler.StreamingListener
Called when a receiver has been started
onReceiverStopped(JavaStreamingListenerReceiverStopped) - Method in interface org.apache.spark.streaming.api.java.PythonStreamingListener
Called when a receiver has been stopped
onReceiverStopped(StreamingListenerReceiverStopped) - Method in interface org.apache.spark.streaming.scheduler.StreamingListener
Called when a receiver has been stopped
onSpeculativeTaskSubmitted(SparkListenerSpeculativeTaskSubmitted) - Method in class org.apache.spark.scheduler.SparkListener
 
onSpeculativeTaskSubmitted(SparkListenerSpeculativeTaskSubmitted) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called when a speculative task is submitted
onSpeculativeTaskSubmitted(SparkListenerSpeculativeTaskSubmitted) - Method in class org.apache.spark.SparkFirehoseListener
 
onStageCompleted(SparkListenerStageCompleted) - Method in class org.apache.spark.scheduler.SparkListener
 
onStageCompleted(SparkListenerStageCompleted) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called when a stage completes successfully or fails, with information on the completed stage.
onStageCompleted(SparkListenerStageCompleted) - Method in class org.apache.spark.scheduler.StatsReportListener
 
onStageCompleted(SparkListenerStageCompleted) - Method in class org.apache.spark.SparkFirehoseListener
 
onStageCompleted(SparkListenerStageCompleted) - Method in class org.apache.spark.SpillListener
 
onStageExecutorMetrics(SparkListenerStageExecutorMetrics) - Method in class org.apache.spark.scheduler.SparkListener
 
onStageExecutorMetrics(SparkListenerStageExecutorMetrics) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called with the peak memory metrics for a given (executor, stage) combination.
onStageExecutorMetrics(SparkListenerStageExecutorMetrics) - Method in class org.apache.spark.SparkFirehoseListener
 
onStageSubmitted(SparkListenerStageSubmitted) - Method in class org.apache.spark.scheduler.SparkListener
 
onStageSubmitted(SparkListenerStageSubmitted) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called when a stage is submitted
onStageSubmitted(SparkListenerStageSubmitted) - Method in class org.apache.spark.SparkFirehoseListener
 
OnStart - Class in org.apache.spark.rpc.netty
 
OnStart() - Constructor for class org.apache.spark.rpc.netty.OnStart
 
onStart() - Method in interface org.apache.spark.rpc.RpcEndpoint
Invoked before RpcEndpoint starts to handle any message.
onStart() - Method in class org.apache.spark.streaming.receiver.Receiver
This method is called by the system when the receiver is started.
OnStop - Class in org.apache.spark.rpc.netty
 
OnStop() - Constructor for class org.apache.spark.rpc.netty.OnStop
 
onStop() - Method in interface org.apache.spark.rpc.RpcEndpoint
Invoked when RpcEndpoint is stopping.
onStop() - Method in class org.apache.spark.streaming.receiver.Receiver
This method is called by the system when the receiver is stopped.
onStreamingStarted(JavaStreamingListenerStreamingStarted) - Method in interface org.apache.spark.streaming.api.java.PythonStreamingListener
Called when the streaming has been started
onStreamingStarted(StreamingListenerStreamingStarted) - Method in interface org.apache.spark.streaming.scheduler.StreamingListener
Called when the streaming has been started
onSuccess(String, QueryExecution, long) - Method in interface org.apache.spark.sql.util.QueryExecutionListener
A callback function that will be called when a query executed successfully.
onTaskCommit(FileCommitProtocol.TaskCommitMessage) - Method in class org.apache.spark.internal.io.FileCommitProtocol
Called on the driver after a task commits.
onTaskCompletion(TaskContext) - Method in class org.apache.spark.storage.ShuffleFetchCompletionListener
 
onTaskCompletion(TaskContext) - Method in interface org.apache.spark.util.TaskCompletionListener
 
onTaskEnd(SparkListenerTaskEnd) - Method in class org.apache.spark.scheduler.SparkListener
 
onTaskEnd(SparkListenerTaskEnd) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called when a task ends
onTaskEnd(SparkListenerTaskEnd) - Method in class org.apache.spark.scheduler.StatsReportListener
 
onTaskEnd(SparkListenerTaskEnd) - Method in class org.apache.spark.SparkFirehoseListener
 
onTaskEnd(SparkListenerTaskEnd) - Method in class org.apache.spark.SpillListener
 
onTaskFailure(TaskContext, Throwable) - Method in interface org.apache.spark.util.TaskFailureListener
 
onTaskGettingResult(SparkListenerTaskGettingResult) - Method in class org.apache.spark.scheduler.SparkListener
 
onTaskGettingResult(SparkListenerTaskGettingResult) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called when a task begins remotely fetching its result (will not be called for tasks that do not need to fetch the result remotely).
onTaskGettingResult(SparkListenerTaskGettingResult) - Method in class org.apache.spark.SparkFirehoseListener
 
onTaskStart(SparkListenerTaskStart) - Method in class org.apache.spark.scheduler.SparkListener
 
onTaskStart(SparkListenerTaskStart) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called when a task starts
onTaskStart(SparkListenerTaskStart) - Method in class org.apache.spark.SparkFirehoseListener
 
onUnpersistRDD(SparkListenerUnpersistRDD) - Method in class org.apache.spark.scheduler.SparkListener
 
onUnpersistRDD(SparkListenerUnpersistRDD) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
Called when an RDD is manually unpersisted by the application
onUnpersistRDD(SparkListenerUnpersistRDD) - Method in class org.apache.spark.SparkFirehoseListener
 
OOM() - Static method in class org.apache.spark.util.SparkExitCode
The default uncaught exception handler was reached, and the uncaught exception was an OutOfMemoryError.
open() - Method in class org.apache.spark.input.PortableDataStream
Create a new DataInputStream from the split and context.
open(long, long) - Method in class org.apache.spark.sql.ForeachWriter
Called when starting to process one partition of new data in the executor.
open(File, M, ClassTag<M>) - Static method in class org.apache.spark.status.KVUtils
Open or create a LevelDB store.
openChannelWrapper() - Method in interface org.apache.spark.shuffle.api.ShufflePartitionWriter
Opens and returns a WritableByteChannelWrapper for transferring bytes from input byte channels to the underlying shuffle data store.
openStream() - Method in interface org.apache.spark.shuffle.api.ShufflePartitionWriter
Open and return an OutputStream that can write bytes to the underlying data store.
ops() - Method in class org.apache.spark.graphx.Graph
The associated GraphOps object.
opt(Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
 
optimize(RDD<Tuple2<Object, Vector>>, Vector) - Method in class org.apache.spark.mllib.optimization.GradientDescent
:: DeveloperApi :: Runs gradient descent on the given training data.
optimize(RDD<Tuple2<Object, Vector>>, Vector) - Method in class org.apache.spark.mllib.optimization.LBFGS
 
optimize(RDD<Tuple2<Object, Vector>>, Vector) - Method in interface org.apache.spark.mllib.optimization.Optimizer
Solve the provided convex optimization problem.
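Optimizer.optimize above takes (label, features) pairs plus an initial weight vector and returns optimized weights; GradientDescent is one such solver. A minimal single-machine sketch of the same idea (batch gradient descent for least squares, in plain Python rather than over RDDs, with hypothetical parameter choices):

```python
def gradient_descent(data, w, step=0.1, iterations=100):
    """Batch gradient descent for least squares.

    data: list of (label, feature_list) pairs; w: initial weights.
    """
    for _ in range(iterations):
        grad = [0.0] * len(w)
        for label, x in data:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - label
            for j, xj in enumerate(x):
                grad[j] += err * xj          # accumulate gradient of squared error
        w = [wi - step * g / len(data) for wi, g in zip(w, grad)]
    return w

# Fit y = 2*x on a toy dataset
data = [(2.0, [1.0]), (4.0, [2.0]), (6.0, [3.0])]
weights = gradient_descent(data, [0.0])
```

Spark's GradientDescent does the same gradient accumulation per iteration, but distributes it with a treeAggregate over the RDD of training points.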
OptimizedCreateHiveTableAsSelectCommand - Class in org.apache.spark.sql.hive.execution
Create table and insert the query result into it.
OptimizedCreateHiveTableAsSelectCommand(CatalogTable, LogicalPlan, Seq<String>, SaveMode) - Constructor for class org.apache.spark.sql.hive.execution.OptimizedCreateHiveTableAsSelectCommand
 
optimizeDocConcentration() - Method in class org.apache.spark.ml.clustering.LDA
 
optimizeDocConcentration() - Method in class org.apache.spark.ml.clustering.LDAModel
 
optimizeDocConcentration() - Method in interface org.apache.spark.ml.clustering.LDAParams
For Online optimizer only (currently): optimizer = "online".
optimizer() - Method in class org.apache.spark.ml.clustering.LDA
 
optimizer() - Method in class org.apache.spark.ml.clustering.LDAModel
 
optimizer() - Method in interface org.apache.spark.ml.clustering.LDAParams
Optimizer or inference algorithm used to estimate the LDA model.
optimizer() - Method in class org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
 
optimizer() - Method in class org.apache.spark.mllib.classification.LogisticRegressionWithSGD
 
optimizer() - Method in class org.apache.spark.mllib.classification.SVMWithSGD
 
Optimizer - Interface in org.apache.spark.mllib.optimization
:: DeveloperApi :: Trait for optimization problem solvers.
optimizer() - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
The optimizer to solve the problem.
optimizer() - Method in class org.apache.spark.mllib.regression.LassoWithSGD
 
optimizer() - Method in class org.apache.spark.mllib.regression.LinearRegressionWithSGD
 
optimizer() - Method in class org.apache.spark.mllib.regression.RidgeRegressionWithSGD
 
option(String, String) - Method in class org.apache.spark.ml.util.MLWriter
Adds an option to the underlying MLWriter.
option(String, String) - Method in class org.apache.spark.sql.DataFrameReader
Adds an input option for the underlying data source.
option(String, boolean) - Method in class org.apache.spark.sql.DataFrameReader
Adds an input option for the underlying data source.
option(String, long) - Method in class org.apache.spark.sql.DataFrameReader
Adds an input option for the underlying data source.
option(String, double) - Method in class org.apache.spark.sql.DataFrameReader
Adds an input option for the underlying data source.
option(String, String) - Method in class org.apache.spark.sql.DataFrameWriter
Adds an output option for the underlying data source.
option(String, boolean) - Method in class org.apache.spark.sql.DataFrameWriter
Adds an output option for the underlying data source.
option(String, long) - Method in class org.apache.spark.sql.DataFrameWriter
Adds an output option for the underlying data source.
option(String, double) - Method in class org.apache.spark.sql.DataFrameWriter
Adds an output option for the underlying data source.
option(String, String) - Method in class org.apache.spark.sql.DataFrameWriterV2
 
option(String, String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
Adds an input option for the underlying data source.
option(String, boolean) - Method in class org.apache.spark.sql.streaming.DataStreamReader
Adds an input option for the underlying data source.
option(String, long) - Method in class org.apache.spark.sql.streaming.DataStreamReader
Adds an input option for the underlying data source.
option(String, double) - Method in class org.apache.spark.sql.streaming.DataStreamReader
Adds an input option for the underlying data source.
option(String, String) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
Adds an output option for the underlying data source.
option(String, boolean) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
Adds an output option for the underlying data source.
option(String, long) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
Adds an output option for the underlying data source.
option(String, double) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
Adds an output option for the underlying data source.
option(String, String) - Method in interface org.apache.spark.sql.WriteConfigMethods
Add a write option.
option(String, boolean) - Method in interface org.apache.spark.sql.WriteConfigMethods
Add a boolean output option.
option(String, long) - Method in interface org.apache.spark.sql.WriteConfigMethods
Add a long output option.
option(String, double) - Method in interface org.apache.spark.sql.WriteConfigMethods
Add a double output option.
Optional<T> - Class in org.apache.spark.api.java
Like java.util.Optional in Java 8, scala.Option in Scala, and com.google.common.base.Optional in Google Guava, this class represents a value of a given type that may or may not exist.
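The present-or-absent semantics described above (with the or(T)/orElse(T) accessors indexed below) can be sketched in a few lines of Python; the class and method names here are illustrative, not Spark API:

```python
class Maybe:
    """Illustrative stand-in for a value that may or may not exist."""

    def __init__(self, value=None, present=False):
        self._value, self._present = value, present

    @classmethod
    def of(cls, value):
        return cls(value, present=True)   # a present value

    @classmethod
    def empty(cls):
        return cls()                      # an absent value

    def is_present(self):
        return self._present

    def or_else(self, default):
        """Return the value if present, otherwise the given default."""
        return self._value if self._present else default

print(Maybe.of("x").or_else("fallback"))   # x
print(Maybe.empty().or_else("fallback"))   # fallback
```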
options(Map<String, String>) - Method in class org.apache.spark.sql.DataFrameReader
(Scala-specific) Adds input options for the underlying data source.
options(Map<String, String>) - Method in class org.apache.spark.sql.DataFrameReader
Adds input options for the underlying data source.
options(Map<String, String>) - Method in class org.apache.spark.sql.DataFrameWriter
(Scala-specific) Adds output options for the underlying data source.
options(Map<String, String>) - Method in class org.apache.spark.sql.DataFrameWriter
Adds output options for the underlying data source.
options(Map<String, String>) - Method in class org.apache.spark.sql.DataFrameWriterV2
 
options(Map<String, String>) - Method in class org.apache.spark.sql.DataFrameWriterV2
 
options(Map<String, String>) - Method in class org.apache.spark.sql.streaming.DataStreamReader
(Scala-specific) Adds input options for the underlying data source.
options(Map<String, String>) - Method in class org.apache.spark.sql.streaming.DataStreamReader
(Java-specific) Adds input options for the underlying data source.
options(Map<String, String>) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
(Scala-specific) Adds output options for the underlying data source.
options(Map<String, String>) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
Adds output options for the underlying data source.
options(Map<String, String>) - Method in interface org.apache.spark.sql.WriteConfigMethods
Add write options from a Scala Map.
options(Map<String, String>) - Method in interface org.apache.spark.sql.WriteConfigMethods
Add write options from a Java Map.
optionToOptional(Option<T>) - Static method in class org.apache.spark.api.java.JavaUtils
 
or(T) - Method in class org.apache.spark.api.java.Optional
 
or(Column) - Method in class org.apache.spark.sql.Column
Boolean OR.
Or - Class in org.apache.spark.sql.sources
A filter that evaluates to true iff at least one of left or right evaluates to true.
Or(Filter, Filter) - Constructor for class org.apache.spark.sql.sources.Or
 
OracleDialect - Class in org.apache.spark.sql.jdbc
 
OracleDialect() - Constructor for class org.apache.spark.sql.jdbc.OracleDialect
 
orc(String...) - Method in class org.apache.spark.sql.DataFrameReader
Loads ORC files and returns the result as a DataFrame.
orc(String) - Method in class org.apache.spark.sql.DataFrameReader
Loads an ORC file and returns the result as a DataFrame.
orc(Seq<String>) - Method in class org.apache.spark.sql.DataFrameReader
Loads ORC files and returns the result as a DataFrame.
orc(String) - Method in class org.apache.spark.sql.DataFrameWriter
Saves the content of the DataFrame in ORC format at the specified path.
orc(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
Loads an ORC file stream, returning the result as a DataFrame.
OrcFileFormat - Class in org.apache.spark.sql.hive.orc
FileFormat for reading ORC files.
OrcFileFormat() - Constructor for class org.apache.spark.sql.hive.orc.OrcFileFormat
 
OrcFileOperator - Class in org.apache.spark.sql.hive.orc
 
OrcFileOperator() - Constructor for class org.apache.spark.sql.hive.orc.OrcFileOperator
 
OrcFilters - Class in org.apache.spark.sql.hive.orc
Helper object for building ORC SearchArguments, which are used for ORC predicate push-down.
OrcFilters() - Constructor for class org.apache.spark.sql.hive.orc.OrcFilters
 
orderBy(String, String...) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset sorted by the given expressions.
orderBy(Column...) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset sorted by the given expressions.
orderBy(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset sorted by the given expressions.
orderBy(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset sorted by the given expressions.
orderBy(String, String...) - Static method in class org.apache.spark.sql.expressions.Window
Creates a WindowSpec with the ordering defined.
orderBy(Column...) - Static method in class org.apache.spark.sql.expressions.Window
Creates a WindowSpec with the ordering defined.
orderBy(String, Seq<String>) - Static method in class org.apache.spark.sql.expressions.Window
Creates a WindowSpec with the ordering defined.
orderBy(Seq<Column>) - Static method in class org.apache.spark.sql.expressions.Window
Creates a WindowSpec with the ordering defined.
orderBy(String, String...) - Method in class org.apache.spark.sql.expressions.WindowSpec
Defines the ordering columns in a WindowSpec.
orderBy(Column...) - Method in class org.apache.spark.sql.expressions.WindowSpec
Defines the ordering columns in a WindowSpec.
orderBy(String, Seq<String>) - Method in class org.apache.spark.sql.expressions.WindowSpec
Defines the ordering columns in a WindowSpec.
orderBy(Seq<Column>) - Method in class org.apache.spark.sql.expressions.WindowSpec
Defines the ordering columns in a WindowSpec.
OrderedRDDFunctions<K,V,P extends scala.Product2<K,V>> - Class in org.apache.spark.rdd
Extra functions available on RDDs of (key, value) pairs where the key is sortable through an implicit conversion.
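OrderedRDDFunctions is what makes operations such as sortByKey available on RDDs of pairs whose key type has an Ordering; expressed locally over a plain collection, the operation is just a key-ordered sort of (key, value) tuples:

```python
# A local analogue of sorting a pair collection by key, in both directions.
pairs = [(3, "c"), (1, "a"), (2, "b")]
ascending = sorted(pairs, key=lambda kv: kv[0])
descending = sorted(pairs, key=lambda kv: kv[0], reverse=True)
```

In Spark the same sort is distributed: a range partitioner assigns key ranges to partitions, and each partition is then sorted locally.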
OrderedRDDFunctions(RDD<P>, Ordering<K>, ClassTag<K>, ClassTag<V>, ClassTag<P>) - Constructor for class org.apache.spark.rdd.OrderedRDDFunctions
 
ordering() - Static method in class org.apache.spark.streaming.Time
 
ORDINAL() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
 
orElse(T) - Method in class org.apache.spark.api.java.Optional
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.graphx.GraphLoader
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.graphx.lib.PageRank
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.graphx.Pregel
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.graphx.util.GraphGenerators
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.internal.io.FileCommitProtocol
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.internal.io.SparkHadoopWriter
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.kafka010.KafkaRedactionUtil
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.kafka010.KafkaTokenSparkConf
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.kafka010.KafkaTokenUtil
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.mapred.SparkHadoopMapRedUtil
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.metrics.GarbageCollectionMetrics
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.ml.feature.QuantileDiscretizer
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.ml.r.RWrapperUtils
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.ml.recommendation.ALS
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.ml.stat.Summarizer
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.ml.tree.impl.GradientBoostedTrees
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.ml.tree.impl.RandomForest
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.mllib.clustering.LocalKMeans
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.mllib.clustering.PowerIterationClustering
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.mllib.fpm.PrefixSpan
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.mllib.linalg.BLAS
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.mllib.optimization.GradientDescent
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.mllib.optimization.LBFGS
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.mllib.stat.correlation.PearsonCorrelation
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.mllib.stat.correlation.SpearmanCorrelation
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.mllib.stat.test.ChiSqTest
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.mllib.stat.test.StudentTTest
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.mllib.stat.test.WelchTTest
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.mllib.tree.DecisionTree
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.mllib.tree.GradientBoostedTrees
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.mllib.tree.model.DecisionTreeModel
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.mllib.tree.RandomForest
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.mllib.util.DataValidators
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.mllib.util.MLUtils
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.rdd.HadoopRDD
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.resource.ResourceUtils
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.scheduler.StatsReportListener
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.security.CryptoStreamUtils
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.serializer.JavaIterableWrapperSerializer
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.serializer.SerializationDebugger
 
org$apache$spark$internal$Logging$$log_() - 类 中的静态方法org.apache.spark.SparkConf
 
org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.SparkContext

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.SparkEnv

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.sql.api.r.SQLUtils

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.sql.dynamicpruning.CleanupDynamicPruningFilters

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.sql.dynamicpruning.PartitionPruning

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.sql.hive.HiveAnalysis

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.sql.hive.HiveStrategies.HiveTableScans

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.sql.hive.HiveStrategies.Scripts

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.sql.hive.HiveUtils

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.sql.hive.orc.OrcFileFormat

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.sql.hive.orc.OrcFileOperator

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.sql.hive.orc.OrcFilters

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.sql.SparkSession

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.sql.types.UDTRegistration

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.status.KVUtils

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.storage.StorageUtils

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.streaming.CheckpointReader

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.streaming.StreamingContext

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.streaming.util.RawTextSender

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.ui.JettyUtils

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.ui.UIUtils

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.util.AccumulatorContext

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.util.ClosureCleaner

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.util.ShutdownHookManager

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.util.SignalUtils

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.util.SizeEstimator

org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.util.Utils
 
org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.graphx.GraphLoader

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.graphx.lib.PageRank

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.graphx.Pregel

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.graphx.util.GraphGenerators

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.internal.io.FileCommitProtocol

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.internal.io.SparkHadoopWriter

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.kafka010.KafkaRedactionUtil

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.kafka010.KafkaTokenSparkConf

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.kafka010.KafkaTokenUtil

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mapred.SparkHadoopMapRedUtil

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.ml.r.RWrapperUtils

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.ml.recommendation.ALS

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.ml.stat.Summarizer

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.ml.tree.impl.RandomForest

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.clustering.LocalKMeans

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.clustering.PowerIterationClustering

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.fpm.PrefixSpan

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.linalg.BLAS

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.optimization.GradientDescent

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.optimization.LBFGS

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.stat.correlation.SpearmanCorrelation

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.stat.test.StudentTTest

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.stat.test.WelchTTest

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.tree.DecisionTree

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.tree.GradientBoostedTrees

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.tree.RandomForest

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.util.DataValidators

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.util.MLUtils

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.rdd.HadoopRDD

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.resource.ResourceUtils

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.scheduler.StatsReportListener

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.security.CryptoStreamUtils

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.serializer.JavaIterableWrapperSerializer

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.serializer.SerializationDebugger

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.SparkConf

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.SparkContext

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.SparkEnv

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.sql.api.r.SQLUtils

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.sql.dynamicpruning.CleanupDynamicPruningFilters

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.sql.dynamicpruning.PartitionPruning

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.sql.hive.HiveAnalysis

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.sql.hive.HiveStrategies.HiveTableScans

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.sql.hive.HiveStrategies.Scripts

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.sql.hive.HiveUtils

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.sql.hive.orc.OrcFileFormat

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.sql.hive.orc.OrcFileOperator

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.sql.hive.orc.OrcFilters

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.sql.SparkSession

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.sql.types.UDTRegistration

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.status.KVUtils

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.storage.StorageUtils

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.streaming.CheckpointReader

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.streaming.StreamingContext

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.streaming.util.RawTextSender

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.ui.JettyUtils

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.ui.UIUtils

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.util.AccumulatorContext

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.util.ClosureCleaner

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.util.ShutdownHookManager

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.util.SignalUtils

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.util.SizeEstimator

org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.util.Utils
 
org$apache$spark$ml$util$BaseReadWrite$$optionSparkSession() - Static method in class org.apache.spark.ml.r.RWrappers

org$apache$spark$ml$util$BaseReadWrite$$optionSparkSession_$eq(Option<SparkSession>) - Static method in class org.apache.spark.ml.r.RWrappers
 
org.apache.spark - package org.apache.spark
Core Spark classes in Scala.
org.apache.spark.api.java - package org.apache.spark.api.java
Spark Java programming APIs.
org.apache.spark.api.java.function - package org.apache.spark.api.java.function
Set of interfaces to represent functions in Spark's Java API.
org.apache.spark.api.r - package org.apache.spark.api.r

org.apache.spark.broadcast - package org.apache.spark.broadcast
Spark's broadcast variables, used to broadcast immutable datasets to all nodes.
org.apache.spark.graphx - package org.apache.spark.graphx
ALPHA COMPONENT: GraphX is a graph processing framework built on top of Spark.
org.apache.spark.graphx.impl - package org.apache.spark.graphx.impl

org.apache.spark.graphx.lib - package org.apache.spark.graphx.lib
Various analytics functions for graphs.
org.apache.spark.graphx.util - package org.apache.spark.graphx.util
Collections of utilities used by graphx.
org.apache.spark.input - package org.apache.spark.input

org.apache.spark.internal - package org.apache.spark.internal

org.apache.spark.internal.config - package org.apache.spark.internal.config

org.apache.spark.internal.io - package org.apache.spark.internal.io

org.apache.spark.io - package org.apache.spark.io
IO codecs used for compression.
org.apache.spark.kafka010 - package org.apache.spark.kafka010

org.apache.spark.launcher - package org.apache.spark.launcher
Library for launching Spark applications programmatically.
org.apache.spark.mapred - package org.apache.spark.mapred

org.apache.spark.metrics - package org.apache.spark.metrics

org.apache.spark.metrics.sink - package org.apache.spark.metrics.sink

org.apache.spark.metrics.source - package org.apache.spark.metrics.source

org.apache.spark.ml - package org.apache.spark.ml
DataFrame-based machine learning APIs to let users quickly assemble and configure practical machine learning pipelines.
org.apache.spark.ml.ann - package org.apache.spark.ml.ann

org.apache.spark.ml.attribute - package org.apache.spark.ml.attribute
ML attributes: the ML pipeline API uses Datasets as ML datasets.
org.apache.spark.ml.classification - package org.apache.spark.ml.classification

org.apache.spark.ml.clustering - package org.apache.spark.ml.clustering

org.apache.spark.ml.evaluation - package org.apache.spark.ml.evaluation

org.apache.spark.ml.feature - package org.apache.spark.ml.feature
Feature transformers: the `ml.feature` package provides common feature transformers that help convert raw data or features into more suitable forms for model fitting.
org.apache.spark.ml.fpm - package org.apache.spark.ml.fpm

org.apache.spark.ml.image - package org.apache.spark.ml.image

org.apache.spark.ml.impl - package org.apache.spark.ml.impl

org.apache.spark.ml.linalg - package org.apache.spark.ml.linalg

org.apache.spark.ml.optim - package org.apache.spark.ml.optim

org.apache.spark.ml.optim.aggregator - package org.apache.spark.ml.optim.aggregator

org.apache.spark.ml.optim.loss - package org.apache.spark.ml.optim.loss

org.apache.spark.ml.param - package org.apache.spark.ml.param

org.apache.spark.ml.param.shared - package org.apache.spark.ml.param.shared

org.apache.spark.ml.r - package org.apache.spark.ml.r

org.apache.spark.ml.recommendation - package org.apache.spark.ml.recommendation

org.apache.spark.ml.regression - package org.apache.spark.ml.regression

org.apache.spark.ml.source.image - package org.apache.spark.ml.source.image

org.apache.spark.ml.source.libsvm - package org.apache.spark.ml.source.libsvm

org.apache.spark.ml.stat - package org.apache.spark.ml.stat

org.apache.spark.ml.stat.distribution - package org.apache.spark.ml.stat.distribution

org.apache.spark.ml.tree - package org.apache.spark.ml.tree

org.apache.spark.ml.tree.impl - package org.apache.spark.ml.tree.impl

org.apache.spark.ml.tuning - package org.apache.spark.ml.tuning

org.apache.spark.ml.util - package org.apache.spark.ml.util

org.apache.spark.mllib - package org.apache.spark.mllib
RDD-based machine learning APIs (in maintenance mode).
org.apache.spark.mllib.classification - package org.apache.spark.mllib.classification

org.apache.spark.mllib.classification.impl - package org.apache.spark.mllib.classification.impl

org.apache.spark.mllib.clustering - package org.apache.spark.mllib.clustering

org.apache.spark.mllib.evaluation - package org.apache.spark.mllib.evaluation

org.apache.spark.mllib.evaluation.binary - package org.apache.spark.mllib.evaluation.binary

org.apache.spark.mllib.feature - package org.apache.spark.mllib.feature

org.apache.spark.mllib.fpm - package org.apache.spark.mllib.fpm

org.apache.spark.mllib.linalg - package org.apache.spark.mllib.linalg

org.apache.spark.mllib.linalg.distributed - package org.apache.spark.mllib.linalg.distributed

org.apache.spark.mllib.optimization - package org.apache.spark.mllib.optimization

org.apache.spark.mllib.pmml - package org.apache.spark.mllib.pmml

org.apache.spark.mllib.pmml.export - package org.apache.spark.mllib.pmml.export

org.apache.spark.mllib.random - package org.apache.spark.mllib.random

org.apache.spark.mllib.rdd - package org.apache.spark.mllib.rdd

org.apache.spark.mllib.recommendation - package org.apache.spark.mllib.recommendation

org.apache.spark.mllib.regression - package org.apache.spark.mllib.regression

org.apache.spark.mllib.regression.impl - package org.apache.spark.mllib.regression.impl

org.apache.spark.mllib.stat - package org.apache.spark.mllib.stat

org.apache.spark.mllib.stat.correlation - package org.apache.spark.mllib.stat.correlation

org.apache.spark.mllib.stat.distribution - package org.apache.spark.mllib.stat.distribution

org.apache.spark.mllib.stat.test - package org.apache.spark.mllib.stat.test

org.apache.spark.mllib.tree - package org.apache.spark.mllib.tree

org.apache.spark.mllib.tree.configuration - package org.apache.spark.mllib.tree.configuration

org.apache.spark.mllib.tree.impurity - package org.apache.spark.mllib.tree.impurity

org.apache.spark.mllib.tree.loss - package org.apache.spark.mllib.tree.loss

org.apache.spark.mllib.tree.model - package org.apache.spark.mllib.tree.model

org.apache.spark.mllib.util - package org.apache.spark.mllib.util

org.apache.spark.partial - package org.apache.spark.partial

org.apache.spark.rdd - package org.apache.spark.rdd
Provides implementations of various RDDs.
org.apache.spark.resource - package org.apache.spark.resource

org.apache.spark.rpc - package org.apache.spark.rpc

org.apache.spark.rpc.netty - package org.apache.spark.rpc.netty

org.apache.spark.scheduler - package org.apache.spark.scheduler
Spark's DAG scheduler.
org.apache.spark.scheduler.cluster - package org.apache.spark.scheduler.cluster

org.apache.spark.scheduler.local - package org.apache.spark.scheduler.local

org.apache.spark.security - package org.apache.spark.security

org.apache.spark.serializer - package org.apache.spark.serializer
Pluggable serializers for RDD and shuffle data.
org.apache.spark.shuffle.api - package org.apache.spark.shuffle.api

org.apache.spark.sql - package org.apache.spark.sql

org.apache.spark.sql.api.java - package org.apache.spark.sql.api.java
Allows the execution of relational queries, including those expressed in SQL using Spark.
org.apache.spark.sql.api.r - package org.apache.spark.sql.api.r

org.apache.spark.sql.catalog - package org.apache.spark.sql.catalog

org.apache.spark.sql.connector.catalog - package org.apache.spark.sql.connector.catalog

org.apache.spark.sql.connector.expressions - package org.apache.spark.sql.connector.expressions

org.apache.spark.sql.connector.read - package org.apache.spark.sql.connector.read

org.apache.spark.sql.connector.read.partitioning - package org.apache.spark.sql.connector.read.partitioning

org.apache.spark.sql.connector.read.streaming - package org.apache.spark.sql.connector.read.streaming

org.apache.spark.sql.connector.write - package org.apache.spark.sql.connector.write

org.apache.spark.sql.connector.write.streaming - package org.apache.spark.sql.connector.write.streaming

org.apache.spark.sql.dynamicpruning - package org.apache.spark.sql.dynamicpruning

org.apache.spark.sql.expressions - package org.apache.spark.sql.expressions

org.apache.spark.sql.expressions.javalang - package org.apache.spark.sql.expressions.javalang

org.apache.spark.sql.expressions.scalalang - package org.apache.spark.sql.expressions.scalalang

org.apache.spark.sql.hive - package org.apache.spark.sql.hive

org.apache.spark.sql.hive.client - package org.apache.spark.sql.hive.client

org.apache.spark.sql.hive.execution - package org.apache.spark.sql.hive.execution

org.apache.spark.sql.hive.orc - package org.apache.spark.sql.hive.orc

org.apache.spark.sql.jdbc - package org.apache.spark.sql.jdbc

org.apache.spark.sql.sources - package org.apache.spark.sql.sources

org.apache.spark.sql.streaming - package org.apache.spark.sql.streaming

org.apache.spark.sql.types - package org.apache.spark.sql.types

org.apache.spark.sql.util - package org.apache.spark.sql.util

org.apache.spark.sql.vectorized - package org.apache.spark.sql.vectorized

org.apache.spark.status - package org.apache.spark.status

org.apache.spark.status.api.v1 - package org.apache.spark.status.api.v1

org.apache.spark.status.api.v1.streaming - package org.apache.spark.status.api.v1.streaming

org.apache.spark.storage - package org.apache.spark.storage

org.apache.spark.storage.memory - package org.apache.spark.storage.memory

org.apache.spark.streaming - package org.apache.spark.streaming

org.apache.spark.streaming.api.java - package org.apache.spark.streaming.api.java
Java APIs for Spark Streaming.
org.apache.spark.streaming.dstream - package org.apache.spark.streaming.dstream
Various implementations of DStreams.
org.apache.spark.streaming.kinesis - package org.apache.spark.streaming.kinesis

org.apache.spark.streaming.receiver - package org.apache.spark.streaming.receiver

org.apache.spark.streaming.scheduler - package org.apache.spark.streaming.scheduler

org.apache.spark.streaming.scheduler.rate - package org.apache.spark.streaming.scheduler.rate

org.apache.spark.streaming.ui - package org.apache.spark.streaming.ui

org.apache.spark.streaming.util - package org.apache.spark.streaming.util

org.apache.spark.ui - package org.apache.spark.ui

org.apache.spark.ui.jobs - package org.apache.spark.ui.jobs

org.apache.spark.ui.storage - package org.apache.spark.ui.storage

org.apache.spark.util - package org.apache.spark.util
Spark utilities.
org.apache.spark.util.logging - package org.apache.spark.util.logging

org.apache.spark.util.random - package org.apache.spark.util.random
Utilities for random number generation.
org.apache.spark.util.sketch - package org.apache.spark.util.sketch
 
original() - Method in interface org.apache.spark.security.CryptoStreamUtils.BaseErrorHandler
The underlying stream that is being wrapped by the encrypted stream, so that it can be closed even if there's an error in the crypto layer.
originalMax() - Method in class org.apache.spark.ml.feature.MinMaxScalerModel

originalMin() - Method in class org.apache.spark.ml.feature.MinMaxScalerModel

orNull() - Method in class org.apache.spark.api.java.Optional

other() - Method in class org.apache.spark.scheduler.RuntimePercentage

otherVertexAttr(long) - Method in class org.apache.spark.graphx.EdgeTriplet
Given one vertex in the edge, return the other vertex.
otherVertexId(long) - Method in class org.apache.spark.graphx.Edge
Given one vertex in the edge, return the other vertex.
otherwise(Object) - Method in class org.apache.spark.sql.Column
Evaluates a list of conditions and returns one of multiple possible result expressions.
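The `when`/`otherwise` chain on `Column` can be sketched as follows; this is an illustrative local example (the session, column, and label names are ours, not from the index), assuming Spark is on the classpath:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, when}

// Local session purely for illustration.
val spark = SparkSession.builder().master("local[1]").appName("otherwise-demo").getOrCreate()

// when() conditions are evaluated in order; otherwise() supplies the value
// used when no condition matches (the result is null if it is omitted).
val labeled = spark.range(0, 5).toDF("age")
  .withColumn("bucket",
    when(col("age") < 2, "low")
      .when(col("age") < 4, "mid")
      .otherwise("high"))

val buckets = labeled.orderBy("age").collect().map(_.getString(1)).toSeq
spark.stop()
```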
Out() - Static method in class org.apache.spark.graphx.EdgeDirection
Edges originating from a vertex.
OutboxMessage - Interface in org.apache.spark.rpc.netty

outDegrees() - Method in class org.apache.spark.graphx.GraphOps

outerJoinVertices(RDD<Tuple2<Object, U>>, Function3<Object, VD, Option<U>, VD2>, ClassTag<U>, ClassTag<VD2>, Predef.$eq$colon$eq<VD, VD2>) - Method in class org.apache.spark.graphx.Graph
Joins the vertices with entries in the table RDD and merges the results using mapFunc.
outerJoinVertices(RDD<Tuple2<Object, U>>, Function3<Object, VD, Option<U>, VD2>, ClassTag<U>, ClassTag<VD2>, Predef.$eq$colon$eq<VD, VD2>) - Method in class org.apache.spark.graphx.impl.GraphImpl

output() - Method in class org.apache.spark.ml.TransformEnd

output() - Method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec

OUTPUT() - Static method in class org.apache.spark.ui.ToolTips

output$() - Constructor for class org.apache.spark.InternalAccumulator.output$

OUTPUT_FORMAT() - Static method in class org.apache.spark.sql.hive.execution.HiveOptions

OUTPUT_METRICS_PREFIX() - Static method in class org.apache.spark.InternalAccumulator

OUTPUT_RECORDS() - Static method in class org.apache.spark.status.TaskIndexNames

OUTPUT_SIZE() - Static method in class org.apache.spark.status.TaskIndexNames

outputBytes() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary

outputBytes() - Method in class org.apache.spark.status.api.v1.StageData

outputCol() - Method in class org.apache.spark.ml.feature.Binarizer

outputCol() - Method in class org.apache.spark.ml.feature.Bucketizer

outputCol() - Method in class org.apache.spark.ml.feature.ChiSqSelector

outputCol() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel

outputCol() - Method in class org.apache.spark.ml.feature.CountVectorizer

outputCol() - Method in class org.apache.spark.ml.feature.CountVectorizerModel

outputCol() - Method in class org.apache.spark.ml.feature.FeatureHasher

outputCol() - Method in class org.apache.spark.ml.feature.HashingTF

outputCol() - Method in class org.apache.spark.ml.feature.IDF

outputCol() - Method in class org.apache.spark.ml.feature.IDFModel

outputCol() - Method in class org.apache.spark.ml.feature.Imputer

outputCol() - Method in class org.apache.spark.ml.feature.ImputerModel

outputCol() - Method in class org.apache.spark.ml.feature.IndexToString

outputCol() - Method in class org.apache.spark.ml.feature.Interaction

outputCol() - Method in class org.apache.spark.ml.feature.MaxAbsScaler

outputCol() - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel

outputCol() - Method in class org.apache.spark.ml.feature.MinMaxScaler

outputCol() - Method in class org.apache.spark.ml.feature.MinMaxScalerModel

outputCol() - Method in class org.apache.spark.ml.feature.OneHotEncoder

outputCol() - Method in class org.apache.spark.ml.feature.OneHotEncoderModel

outputCol() - Method in class org.apache.spark.ml.feature.PCA

outputCol() - Method in class org.apache.spark.ml.feature.PCAModel

outputCol() - Method in class org.apache.spark.ml.feature.QuantileDiscretizer

outputCol() - Method in class org.apache.spark.ml.feature.RobustScaler

outputCol() - Method in class org.apache.spark.ml.feature.RobustScalerModel

outputCol() - Method in class org.apache.spark.ml.feature.StandardScaler

outputCol() - Method in class org.apache.spark.ml.feature.StandardScalerModel

outputCol() - Method in class org.apache.spark.ml.feature.StopWordsRemover

outputCol() - Method in class org.apache.spark.ml.feature.StringIndexer

outputCol() - Method in class org.apache.spark.ml.feature.StringIndexerModel

outputCol() - Method in class org.apache.spark.ml.feature.VectorAssembler

outputCol() - Method in class org.apache.spark.ml.feature.VectorIndexer

outputCol() - Method in class org.apache.spark.ml.feature.VectorIndexerModel

outputCol() - Method in class org.apache.spark.ml.feature.VectorSlicer

outputCol() - Method in class org.apache.spark.ml.feature.Word2Vec

outputCol() - Method in class org.apache.spark.ml.feature.Word2VecModel

outputCol() - Method in interface org.apache.spark.ml.param.shared.HasOutputCol
Param for output column name.
outputCol() - Method in class org.apache.spark.ml.UnaryTransformer

outputCols() - Method in class org.apache.spark.ml.feature.Binarizer

outputCols() - Method in class org.apache.spark.ml.feature.Bucketizer

outputCols() - Method in class org.apache.spark.ml.feature.Imputer

outputCols() - Method in class org.apache.spark.ml.feature.ImputerModel

outputCols() - Method in class org.apache.spark.ml.feature.OneHotEncoder

outputCols() - Method in class org.apache.spark.ml.feature.OneHotEncoderModel

outputCols() - Method in class org.apache.spark.ml.feature.QuantileDiscretizer

outputCols() - Method in class org.apache.spark.ml.feature.StringIndexer

outputCols() - Method in class org.apache.spark.ml.feature.StringIndexerModel

outputCols() - Method in interface org.apache.spark.ml.param.shared.HasOutputCols
Param for output column names.
outputColumnNames() - Method in interface org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectBase

outputColumnNames() - Method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand

outputColumnNames() - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveDirCommand

outputColumnNames() - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable

outputColumnNames() - Method in class org.apache.spark.sql.hive.execution.OptimizedCreateHiveTableAsSelectCommand

OutputCommitCoordinationMessage - Interface in org.apache.spark.scheduler

outputCommitCoordinator() - Method in class org.apache.spark.SparkEnv

outputEncoder() - Method in class org.apache.spark.ml.feature.StringIndexerAggregator

outputEncoder() - Method in class org.apache.spark.sql.expressions.Aggregator
Specifies the Encoder for the final output value type.
outputFormat() - Method in class org.apache.spark.sql.hive.execution.HiveOptions

OutputMetricDistributions - Class in org.apache.spark.status.api.v1

OutputMetrics - Class in org.apache.spark.status.api.v1

outputMetrics() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions

outputMetrics() - Method in class org.apache.spark.status.api.v1.TaskMetrics

outputMode(OutputMode) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
Specifies how data of a streaming DataFrame/Dataset is written to a streaming sink.
outputMode(String) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
Specifies how data of a streaming DataFrame/Dataset is written to a streaming sink.
OutputMode - Class in org.apache.spark.sql.streaming
OutputMode describes what data will be written to a streaming sink when there is new data available in a streaming DataFrame/Dataset.
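A hedged sketch of how `outputMode` is set on a `DataStreamWriter`; the `rate` source and `console` sink are Spark built-ins, while the option values and query shape here are our own illustration:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.OutputMode

val spark = SparkSession.builder().master("local[1]").appName("outputmode-demo").getOrCreate()

// The "rate" source emits (timestamp, value) rows; a global count is an
// aggregation, so Complete (or Update) mode is required -- Append would
// be rejected for this query by the analyzer.
val counts = spark.readStream
  .format("rate")
  .option("rowsPerSecond", "5")
  .load()
  .groupBy()
  .count()

// outputMode accepts either an OutputMode constant or one of the strings
// "append", "complete", "update".
val query = counts.writeStream
  .outputMode(OutputMode.Complete())
  .format("console")
  .start()

query.stop()
spark.stop()
```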
OutputMode() - Constructor for class org.apache.spark.sql.streaming.OutputMode

OutputOperationInfo - Class in org.apache.spark.status.api.v1.streaming

OutputOperationInfo - Class in org.apache.spark.streaming.scheduler
:: DeveloperApi :: Class having information on output operations.
OutputOperationInfo(Time, int, String, String, Option<Object>, Option<Object>, Option<String>) - Constructor for class org.apache.spark.streaming.scheduler.OutputOperationInfo

outputOperationInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationCompleted

outputOperationInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationStarted

outputOperationInfos() - Method in class org.apache.spark.streaming.scheduler.BatchInfo

outputOpId() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo

outputPartitioning() - Method in interface org.apache.spark.sql.connector.read.SupportsReportPartitioning
Returns the output data partitioning that this reader guarantees.
outputPartitioning() - Method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec

outputRecords() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary

outputRecords() - Method in class org.apache.spark.status.api.v1.StageData

outputRowFormat() - Method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema

outputRowFormatMap() - Method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema

outputSerdeClass() - Method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema

outputSerdeProps() - Method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema

over(WindowSpec) - Method in class org.apache.spark.sql.Column
Defines a windowing column.
over() - Method in class org.apache.spark.sql.Column
Defines an empty analytic clause.
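How `over(WindowSpec)` turns an analytic function into a windowing column can be sketched like this; the sample data and column names are illustrative, not from the index:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, rank}

val spark = SparkSession.builder().master("local[1]").appName("window-demo").getOrCreate()
import spark.implicits._

val sales = Seq(("a", 10), ("a", 20), ("b", 5)).toDF("dept", "amount")

// rank() on its own is just an analytic function; over(spec) makes it a
// windowing column evaluated per dept partition, ordered by amount desc.
val spec = Window.partitionBy("dept").orderBy(col("amount").desc)
val ranked = sales.withColumn("rank", rank().over(spec))

// Top-selling row per department.
val top = ranked.filter(col("rank") === 1)
  .orderBy("dept")
  .collect()
  .map(r => (r.getString(0), r.getInt(1)))
  .toSeq
spark.stop()
```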
overallScore(Dataset<Row>, Column) - Static method in class org.apache.spark.ml.evaluation.CosineSilhouette

overallScore(Dataset<Row>, Column) - Static method in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette

overlay(Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
Overlay the specified portion of src with replace, starting from byte position pos of src and proceeding for len bytes.
overlay(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
Overlay the specified portion of src with replace, starting from byte position pos of src.
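A small sketch of the three-argument `overlay` variant; the input string is our own example:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{lit, overlay}

val spark = SparkSession.builder().master("local[1]").appName("overlay-demo").getOrCreate()
import spark.implicits._

// Positions are 1-based: replacing from position 7 of "SPARK_SQL" with
// "CORE" swaps out the trailing "SQL", giving "SPARK_CORE".
val result = Seq("SPARK_SQL").toDF("s")
  .select(overlay($"s", lit("CORE"), lit(7)).as("r"))
  .head()
  .getString(0)
spark.stop()
```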
overwrite() - Method in class org.apache.spark.ml.util.MLWriter
Overwrites if the output path already exists.
overwrite(Filter[]) - Method in interface org.apache.spark.sql.connector.write.SupportsOverwrite
Configures a write to replace data matching the filters with data committed in the write.
overwrite(Column) - Method in class org.apache.spark.sql.DataFrameWriterV2
Overwrite rows matching the given filter condition with the contents of the data frame in the output table.
overwrite() - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveDirCommand

overwrite() - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable

overwriteDynamicPartitions() - Method in interface org.apache.spark.sql.connector.write.SupportsDynamicOverwrite
Configures a write to dynamically replace partitions with data committed in the write.
overwritePartitions() - Method in class org.apache.spark.sql.DataFrameWriterV2
Overwrite all partitions for which the data frame contains at least one row with the contents of the data frame in the output table.

P

p() - Method in class org.apache.spark.ml.feature.Normalizer
Normalization in L^p^ space.
PagedTable<T> - Interface in org.apache.spark.ui
A paged table that will generate an HTML table for a specified page and also the page navigation.
pageLink(int) - Method in interface org.apache.spark.ui.PagedTable
Return a link to jump to a page.
pageNavigation(int, int, int) - Method in interface org.apache.spark.ui.PagedTable
Return a page navigation.
pageNumberFormField() - Method in interface org.apache.spark.ui.PagedTable

pageRank(double, double) - Method in class org.apache.spark.graphx.GraphOps
Run a dynamic version of PageRank returning a graph with vertex attributes containing the PageRank and edge attributes containing the normalized edge weight.
PageRank - Class in org.apache.spark.graphx.lib
PageRank algorithm implementation.
PageRank() - Constructor for class org.apache.spark.graphx.lib.PageRank

pageSizeFormField() - Method in interface org.apache.spark.ui.PagedTable

PairDStreamFunctions<K,V> - Class in org.apache.spark.streaming.dstream
Extra functions available on DStream of (key, value) pairs through an implicit conversion.
PairDStreamFunctions(DStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>, Ordering<K>) - Constructor for class org.apache.spark.streaming.dstream.PairDStreamFunctions

PairFlatMapFunction<T,K,V> - Interface in org.apache.spark.api.java.function
A function that returns zero or more key-value pair records from each input record.
PairFunction<T,K,V> - Interface in org.apache.spark.api.java.function
A function that returns key-value pairs (Tuple2<K, V>), and can be used to construct PairRDDs.
PairRDDFunctions<K,V> - Class in org.apache.spark.rdd
Extra functions available on RDDs of (key, value) pairs through an implicit conversion.
PairRDDFunctions(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>, Ordering<K>) - Constructor for class org.apache.spark.rdd.PairRDDFunctions

PairwiseRRDD<T> - Class in org.apache.spark.api.r
Form an RDD[(Int, Array[Byte])] from key-value pairs returned from R.
PairwiseRRDD(RDD<T>, int, byte[], String, byte[], Object[], ClassTag<T>) - Constructor for class org.apache.spark.api.r.PairwiseRRDD

parallelism() - Method in class org.apache.spark.ml.classification.OneVsRest

parallelism() - Method in interface org.apache.spark.ml.param.shared.HasParallelism
The number of threads to use when running parallel algorithms.
parallelism() - Method in class org.apache.spark.ml.tuning.CrossValidator

parallelism() - Method in class org.apache.spark.ml.tuning.TrainValidationSplit

parallelize(List<T>, int) - Method in class org.apache.spark.api.java.JavaSparkContext
Distribute a local Scala collection to form an RDD.
parallelize(List<T>) - Method in class org.apache.spark.api.java.JavaSparkContext
Distribute a local Scala collection to form an RDD.
parallelize(Seq<T>, int, ClassTag<T>) - Method in class org.apache.spark.SparkContext
Distribute a local Scala collection to form an RDD.
parallelizeDoubles(List<Double>, int) - Method in class org.apache.spark.api.java.JavaSparkContext
Distribute a local Scala collection to form an RDD.
parallelizeDoubles(List<Double>) - Method in class org.apache.spark.api.java.JavaSparkContext
Distribute a local Scala collection to form an RDD.
parallelizePairs(List<Tuple2<K, V>>, int) - Method in class org.apache.spark.api.java.JavaSparkContext
Distribute a local Scala collection to form an RDD.
parallelizePairs(List<Tuple2<K, V>>) - Method in class org.apache.spark.api.java.JavaSparkContext
Distribute a local Scala collection to form an RDD.
Param<T> - org.apache.spark.ml.param中的类
:: DeveloperApi :: A param with self-contained documentation and optionally a default value.
Param(String, String, String, Function1<T, Object>) - 类 的构造器org.apache.spark.ml.param.Param
 
Param(Identifiable, String, String, Function1<T, Object>) - 类 的构造器org.apache.spark.ml.param.Param
 
Param(String, String, String) - 类 的构造器org.apache.spark.ml.param.Param
 
Param(Identifiable, String, String) - 类 的构造器org.apache.spark.ml.param.Param
 
param() - 类 中的方法org.apache.spark.ml.param.ParamPair
 
ParamGridBuilder - org.apache.spark.ml.tuning中的类
Builder for a param grid used in grid search-based model selection.
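The grid expansion that ParamGridBuilder.build() performs can be sketched in plain Python as a Cartesian product over candidate values (a hypothetical helper for illustration, not the Spark API):

```python
from itertools import product

def build_param_grid(grid):
    """Expand {param_name: [values]} into a list of {param_name: value} maps,
    mirroring the candidate set that grid search-based model selection iterates over."""
    names = sorted(grid)
    return [dict(zip(names, combo)) for combo in product(*(grid[n] for n in names))]

# 2 regParam values x 3 maxIter values = 6 candidate parameter maps
maps = build_param_grid({"regParam": [0.01, 0.1], "maxIter": [10, 50, 100]})
```

CrossValidator and TrainValidationSplit consume such a list of param maps, fitting one model per map.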
ParamGridBuilder() - 类 的构造器org.apache.spark.ml.tuning.ParamGridBuilder
 
ParamMap - org.apache.spark.ml.param中的类
A param to value map.
ParamMap() - 类 的构造器org.apache.spark.ml.param.ParamMap
Creates an empty param map.
paramMap() - 接口 中的方法org.apache.spark.ml.param.Params
Internal param map for user-supplied values.
ParamPair<T> - org.apache.spark.ml.param中的类
A param and its value.
ParamPair(Param<T>, T) - 类 的构造器org.apache.spark.ml.param.ParamPair
 
params() - 类 中的方法org.apache.spark.ml.clustering.PowerIterationClustering
 
params() - 类 中的方法org.apache.spark.ml.evaluation.Evaluator
 
params() - 类 中的方法org.apache.spark.ml.fpm.PrefixSpan
 
params() - 类 中的方法org.apache.spark.ml.param.JavaParams
 
Params - org.apache.spark.ml.param中的接口
:: DeveloperApi :: Trait for components that take parameters.
params() - 接口 中的方法org.apache.spark.ml.param.Params
Returns all params sorted by their names.
params() - 类 中的方法org.apache.spark.ml.PipelineStage
 
ParamValidators - org.apache.spark.ml.param中的类
:: DeveloperApi :: Factory methods for common validation functions for Param.isValid.
ParamValidators() - 类 的构造器org.apache.spark.ml.param.ParamValidators
 
parent() - 类 中的方法org.apache.spark.ml.Model
The parent estimator that produced this model.
parent() - 类 中的方法org.apache.spark.ml.param.Param
 
parent() - 接口 中的方法org.apache.spark.scheduler.Schedulable
 
ParentClassLoader - org.apache.spark.util中的类
A class loader which makes some protected methods in ClassLoader accessible.
ParentClassLoader(ClassLoader) - 类 的构造器org.apache.spark.util.ParentClassLoader
 
parentIds() - 类 中的方法org.apache.spark.scheduler.StageInfo
 
parentIds() - 类 中的方法org.apache.spark.storage.RDDInfo
 
parentIndex(int) - 类 中的静态方法org.apache.spark.mllib.tree.model.Node
Get the parent index of the given node, or 0 if it is the root.
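Assuming the 1-based binary-heap node numbering used by mllib decision trees (root is 1; children of node i are 2i and 2i+1), the parent lookup reduces to an integer halving (an illustrative sketch, not the Spark implementation):

```python
def parent_index(node_index: int) -> int:
    # In a 1-based heap layout the parent of node i is i // 2;
    # the root (index 1) maps to 0, matching the documented contract.
    return node_index >> 1
```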
parmap(Seq<I>, String, int, Function1<I, O>) - 类 中的静态方法org.apache.spark.util.ThreadUtils
Transforms input collection by applying the given function to each element in parallel fashion.
parquet(String...) - 类 中的方法org.apache.spark.sql.DataFrameReader
Loads a Parquet file, returning the result as a DataFrame.
parquet(String) - 类 中的方法org.apache.spark.sql.DataFrameReader
Loads a Parquet file, returning the result as a DataFrame.
parquet(Seq<String>) - 类 中的方法org.apache.spark.sql.DataFrameReader
Loads a Parquet file, returning the result as a DataFrame.
parquet(String) - 类 中的方法org.apache.spark.sql.DataFrameWriter
Saves the content of the DataFrame in Parquet format at the specified path.
parquet(String) - 类 中的方法org.apache.spark.sql.streaming.DataStreamReader
Loads a Parquet file stream, returning the result as a DataFrame.
parse(String) - 类 中的静态方法org.apache.spark.ml.feature.RFormulaParser
 
parse(String) - 类 中的静态方法org.apache.spark.mllib.linalg.Vectors
Parses a string resulting from Vector.toString into a Vector.
parse(String) - 类 中的静态方法org.apache.spark.mllib.regression.LabeledPoint
Parses a string resulting from LabeledPoint#toString into a LabeledPoint.
parse(String) - 类 中的静态方法org.apache.spark.mllib.util.NumericParser
Parses a string into a Double, an Array[Double], or a Seq[Any].
parseAll(Parsers.Parser<T>, Reader<Object>) - 类 中的静态方法org.apache.spark.ml.feature.RFormulaParser
 
parseAll(Parsers.Parser<T>, Reader) - 类 中的静态方法org.apache.spark.ml.feature.RFormulaParser
 
parseAll(Parsers.Parser<T>, CharSequence) - 类 中的静态方法org.apache.spark.ml.feature.RFormulaParser
 
parseAllocatedFromJsonFile(String) - 类 中的静态方法org.apache.spark.resource.ResourceUtils
 
parseAllResourceRequests(SparkConf, String) - 类 中的静态方法org.apache.spark.resource.ResourceUtils
 
parseHostPort(String) - 类 中的静态方法org.apache.spark.util.Utils
 
parseIgnoreCase(Class<E>, String) - 类 中的静态方法org.apache.spark.util.EnumUtil
 
parseJson(String) - 类 中的静态方法org.apache.spark.resource.ResourceInformation
Parses a JSON string into a ResourceInformation instance.
parseJson(JsonAST.JValue) - 类 中的静态方法org.apache.spark.resource.ResourceInformation
 
Parser(Function1<Reader<Object>, Parsers.ParseResult<T>>) - 类 中的静态方法org.apache.spark.ml.feature.RFormulaParser
 
parseResourceRequest(SparkConf, ResourceID) - 类 中的静态方法org.apache.spark.resource.ResourceUtils
 
parseResourceRequirements(SparkConf, String) - 类 中的静态方法org.apache.spark.resource.ResourceUtils
 
parseStandaloneMasterUrls(String) - 类 中的静态方法org.apache.spark.util.Utils
Split the comma delimited string of master URLs into a list.
PartialResult<R> - org.apache.spark.partial中的类
 
PartialResult(R, boolean) - 类 的构造器org.apache.spark.partial.PartialResult
 
Partition - org.apache.spark中的接口
An identifier for a partition in an RDD.
partition() - 类 中的方法org.apache.spark.scheduler.AskPermissionToCommitOutput
 
partition() - 类 中的方法org.apache.spark.sql.hive.execution.InsertIntoHiveTable
 
partition(String) - 类 中的方法org.apache.spark.status.LiveRDD
 
partitionBy(Partitioner) - 类 中的方法org.apache.spark.api.java.JavaPairRDD
Return a copy of the RDD partitioned using the specified partitioner.
partitionBy(PartitionStrategy) - 类 中的方法org.apache.spark.graphx.Graph
Repartitions the edges in the graph according to partitionStrategy.
partitionBy(PartitionStrategy, int) - 类 中的方法org.apache.spark.graphx.Graph
Repartitions the edges in the graph according to partitionStrategy.
partitionBy(PartitionStrategy) - 类 中的方法org.apache.spark.graphx.impl.GraphImpl
 
partitionBy(PartitionStrategy, int) - 类 中的方法org.apache.spark.graphx.impl.GraphImpl
 
partitionBy(Partitioner) - 类 中的方法org.apache.spark.rdd.PairRDDFunctions
Return a copy of the RDD partitioned using the specified partitioner.
partitionBy(String...) - 类 中的方法org.apache.spark.sql.DataFrameWriter
Partitions the output by the given columns on the file system.
partitionBy(Seq<String>) - 类 中的方法org.apache.spark.sql.DataFrameWriter
Partitions the output by the given columns on the file system.
partitionBy(String, String...) - 类 中的静态方法org.apache.spark.sql.expressions.Window
Creates a WindowSpec with the partitioning defined.
partitionBy(Column...) - 类 中的静态方法org.apache.spark.sql.expressions.Window
Creates a WindowSpec with the partitioning defined.
partitionBy(String, Seq<String>) - 类 中的静态方法org.apache.spark.sql.expressions.Window
Creates a WindowSpec with the partitioning defined.
partitionBy(Seq<Column>) - 类 中的静态方法org.apache.spark.sql.expressions.Window
Creates a WindowSpec with the partitioning defined.
partitionBy(String, String...) - 类 中的方法org.apache.spark.sql.expressions.WindowSpec
Defines the partitioning columns in a WindowSpec.
partitionBy(Column...) - 类 中的方法org.apache.spark.sql.expressions.WindowSpec
Defines the partitioning columns in a WindowSpec.
partitionBy(String, Seq<String>) - 类 中的方法org.apache.spark.sql.expressions.WindowSpec
Defines the partitioning columns in a WindowSpec.
partitionBy(Seq<Column>) - 类 中的方法org.apache.spark.sql.expressions.WindowSpec
Defines the partitioning columns in a WindowSpec.
partitionBy(String...) - 类 中的方法org.apache.spark.sql.streaming.DataStreamWriter
Partitions the output by the given columns on the file system.
partitionBy(Seq<String>) - 类 中的方法org.apache.spark.sql.streaming.DataStreamWriter
Partitions the output by the given columns on the file system.
PartitionCoalescer - org.apache.spark.rdd中的接口
::DeveloperApi:: A PartitionCoalescer defines how to coalesce the partitions of a given RDD.
partitionedBy(Column, Seq<Column>) - 接口 中的方法org.apache.spark.sql.CreateTableWriter
Partition the output table created by create, createOrReplace, or replace using the given columns or transforms.
partitionedBy(Column, Column...) - 类 中的方法org.apache.spark.sql.DataFrameWriterV2
 
partitionedBy(Column, Seq<Column>) - 类 中的方法org.apache.spark.sql.DataFrameWriterV2
 
partitioner() - 接口 中的方法org.apache.spark.api.java.JavaRDDLike
The partitioner of this RDD.
partitioner() - 类 中的方法org.apache.spark.graphx.impl.EdgeRDDImpl
If partitionsRDD already has a partitioner, use it.
partitioner() - 类 中的方法org.apache.spark.graphx.impl.VertexRDDImpl
 
Partitioner - org.apache.spark中的类
An object that defines how the elements in a key-value pair RDD are partitioned by key.
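A Partitioner maps each key to a partition index in [0, numPartitions). The common hash-based scheme can be sketched in plain Python (illustrative only; Python's hash() differs from the JVM hashCode that Spark's HashPartitioner uses):

```python
def hash_partition(key, num_partitions: int) -> int:
    # Non-negative modulus of the key's hash picks the target partition.
    # Keys with equal hashes always land in the same partition, which is
    # what key-based operations like reduceByKey rely on.
    return hash(key) % num_partitions
```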
Partitioner() - 类 的构造器org.apache.spark.Partitioner
 
partitioner() - 类 中的方法org.apache.spark.rdd.CoGroupedRDD
 
partitioner() - 类 中的方法org.apache.spark.rdd.RDD
Optionally overridden by subclasses to specify how they are partitioned.
partitioner() - 类 中的方法org.apache.spark.rdd.ShuffledRDD
 
partitioner() - 类 中的方法org.apache.spark.ShuffleDependency
 
partitioner(Partitioner) - 类 中的方法org.apache.spark.streaming.StateSpec
Set the partitioner by which the state RDDs generated by mapWithState will be partitioned.
PartitionGroup - org.apache.spark.rdd中的类
::DeveloperApi:: A group of Partitions. param prefLoc: the preferred location for the partition group.
PartitionGroup(Option<String>) - 类 的构造器org.apache.spark.rdd.PartitionGroup
 
partitionGroupOrdering() - 类 中的方法org.apache.spark.rdd.DefaultPartitionCoalescer
Accessor for nested Scala object
partitionGroupOrdering$() - 类 的构造器org.apache.spark.rdd.DefaultPartitionCoalescer.partitionGroupOrdering$
 
partitionId() - 类 中的方法org.apache.spark.BarrierTaskContext
 
partitionID() - 类 中的方法org.apache.spark.TaskCommitDenied
 
partitionId() - 类 中的方法org.apache.spark.TaskContext
The ID of the RDD partition that is computed by this task.
partitioning() - 接口 中的方法org.apache.spark.sql.connector.catalog.Table
Returns the physical partitioning of this table.
Partitioning - org.apache.spark.sql.connector.read.partitioning中的接口
An interface to represent the output data partitioning for a data source, which is returned by SupportsReportPartitioning.outputPartitioning().
PartitionOffset - org.apache.spark.sql.connector.read.streaming中的接口
Used for per-partition offsets in continuous processing.
PartitionPruning - org.apache.spark.sql.dynamicpruning中的类
Dynamic partition pruning optimization is performed based on the type and selectivity of the join operation.
PartitionPruning() - 类 的构造器org.apache.spark.sql.dynamicpruning.PartitionPruning
 
PartitionPruningRDD<T> - org.apache.spark.rdd中的类
:: DeveloperApi :: An RDD used to prune RDD partitions so we can avoid launching tasks on all partitions.
PartitionPruningRDD(RDD<T>, Function1<Object, Object>, ClassTag<T>) - 类 的构造器org.apache.spark.rdd.PartitionPruningRDD
 
PartitionReader<T> - org.apache.spark.sql.connector.read中的接口
PartitionReaderFactory - org.apache.spark.sql.connector.read中的接口
A factory used to create PartitionReader instances.
partitions() - 接口 中的方法org.apache.spark.api.java.JavaRDDLike
Set of partitions in this RDD.
partitions() - 类 中的方法org.apache.spark.rdd.PartitionGroup
 
partitions() - 类 中的方法org.apache.spark.rdd.RDD
Get the array of partitions of this RDD, taking into account whether the RDD is checkpointed or not.
partitions() - 类 中的方法org.apache.spark.status.api.v1.RDDStorageInfo
 
partitionsRDD() - 类 中的方法org.apache.spark.graphx.impl.EdgeRDDImpl
 
partitionsRDD() - 类 中的方法org.apache.spark.graphx.impl.VertexRDDImpl
 
PartitionStrategy - org.apache.spark.graphx中的接口
Represents the way edges are assigned to edge partitions based on their source and destination vertex IDs.
PartitionStrategy.CanonicalRandomVertexCut$ - org.apache.spark.graphx中的类
Assigns edges to partitions by hashing the source and destination vertex IDs in a canonical direction, resulting in a random vertex cut that colocates all edges between two vertices, regardless of direction.
PartitionStrategy.EdgePartition1D$ - org.apache.spark.graphx中的类
Assigns edges to partitions using only the source vertex ID, colocating edges with the same source.
PartitionStrategy.EdgePartition2D$ - org.apache.spark.graphx中的类
Assigns edges to partitions using a 2D partitioning of the sparse edge adjacency matrix, guaranteeing a 2 * sqrt(numParts) bound on vertex replication.
PartitionStrategy.RandomVertexCut$ - org.apache.spark.graphx中的类
Assigns edges to partitions by hashing the source and destination vertex IDs, resulting in a random vertex cut that colocates all same-direction edges between two vertices.
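The 2D strategy's replication bound comes from laying edges out on a sqrt(P) x sqrt(P) grid keyed by source and destination IDs, so each vertex appears in at most one row plus one column of partitions. A simplified sketch, assuming num_parts is a perfect square and omitting GraphX's prime-mixing and non-square handling:

```python
import math

def edge_partition_2d(src: int, dst: int, num_parts: int) -> int:
    # Place edge (src, dst) on a side x side grid: the source ID picks the
    # row, the destination ID picks the column. Every edge touching a given
    # vertex then lives in at most 2 * side = 2 * sqrt(num_parts) partitions.
    side = int(math.isqrt(num_parts))
    return (src % side) * side + (dst % side)
```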
PartitionTypeHelper(StructType) - 类 的构造器org.apache.spark.sql.connector.catalog.CatalogV2Implicits.PartitionTypeHelper
 
path() - 类 中的方法org.apache.spark.ml.LoadInstanceStart
 
path() - 类 中的方法org.apache.spark.ml.SaveInstanceEnd
 
path() - 类 中的方法org.apache.spark.ml.SaveInstanceStart
 
path() - 类 中的方法org.apache.spark.scheduler.InputFormatInfo
 
path() - 类 中的方法org.apache.spark.scheduler.SplitInfo
 
pattern() - 类 中的方法org.apache.spark.ml.feature.RegexTokenizer
Regex pattern used to match delimiters if gaps is true or tokens if gaps is false.
pc() - 类 中的方法org.apache.spark.ml.feature.PCAModel
 
pc() - 类 中的方法org.apache.spark.mllib.feature.PCAModel
 
PCA - org.apache.spark.ml.feature中的类
PCA trains a model to project vectors to a lower-dimensional space of the top k principal components.
PCA(String) - 类 的构造器org.apache.spark.ml.feature.PCA
 
PCA() - 类 的构造器org.apache.spark.ml.feature.PCA
 
PCA - org.apache.spark.mllib.feature中的类
A feature transformer that projects vectors to a low-dimensional space using PCA.
PCA(int) - 类 的构造器org.apache.spark.mllib.feature.PCA
 
PCAModel - org.apache.spark.ml.feature中的类
Model fitted by PCA.
PCAModel - org.apache.spark.mllib.feature中的类
Model fitted by PCA that can project vectors to a low-dimensional space using PCA.
PCAParams - org.apache.spark.ml.feature中的接口
Params for PCA and PCAModel.
PCAUtil - org.apache.spark.mllib.feature中的类
 
PCAUtil() - 类 的构造器org.apache.spark.mllib.feature.PCAUtil
 
pdf(Vector) - 类 中的方法org.apache.spark.ml.stat.distribution.MultivariateGaussian
Returns the density of this multivariate Gaussian at the given point x.
pdf(Vector) - 类 中的方法org.apache.spark.mllib.stat.distribution.MultivariateGaussian
Returns the density of this multivariate Gaussian at the given point x.
PEAK_EXECUTION_MEMORY() - 类 中的静态方法org.apache.spark.InternalAccumulator
 
PEAK_EXECUTION_MEMORY() - 类 中的静态方法org.apache.spark.ui.jobs.TaskDetailsClassNames
 
PEAK_EXECUTION_MEMORY() - 类 中的静态方法org.apache.spark.ui.ToolTips
 
PEAK_MEM() - 类 中的静态方法org.apache.spark.status.TaskIndexNames
 
peakExecutionMemory() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
peakExecutionMemory() - 类 中的方法org.apache.spark.status.api.v1.TaskMetricDistributions
 
peakExecutionMemory() - 类 中的方法org.apache.spark.status.api.v1.TaskMetrics
 
peakExecutorMetrics() - 类 中的方法org.apache.spark.status.LiveExecutor
 
peakMemoryMetrics() - 类 中的方法org.apache.spark.status.api.v1.ExecutorSummary
 
PEARSON() - 类 中的静态方法org.apache.spark.mllib.stat.test.ChiSqTest
 
PearsonCorrelation - org.apache.spark.mllib.stat.correlation中的类
Compute Pearson correlation for two RDDs of the type RDD[Double] or the correlation matrix for an RDD of the type RDD[Vector].
PearsonCorrelation() - 类 的构造器org.apache.spark.mllib.stat.correlation.PearsonCorrelation
 
percent_rank() - 类 中的静态方法org.apache.spark.sql.functions
Window function: returns the relative rank (i.e. percentile) of rows within a window partition.
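percent_rank follows the standard SQL definition (rank - 1) / (rows in partition - 1), with tied values sharing a rank. A plain-Python sketch over one window partition (illustrative, not the Spark implementation):

```python
def percent_rank(values):
    # Rank each row within its partition (ties share the rank of the first
    # equal value in sort order), then rescale ranks to [0.0, 1.0].
    n = len(values)
    ordered = sorted(values)
    ranks = [ordered.index(v) + 1 for v in values]
    return [0.0 if n == 1 else (r - 1) / (n - 1) for r in ranks]
```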
percentile() - 类 中的方法org.apache.spark.ml.feature.ChiSqSelector
 
percentile() - 类 中的方法org.apache.spark.ml.feature.ChiSqSelectorModel
 
percentile() - 接口 中的方法org.apache.spark.ml.feature.ChiSqSelectorParams
Percentile of features that selector will select, ordered by statistics value descending.
percentile() - 类 中的方法org.apache.spark.mllib.feature.ChiSqSelector
 
percentiles() - 类 中的静态方法org.apache.spark.scheduler.StatsReportListener
 
percentilesHeader() - 类 中的静态方法org.apache.spark.scheduler.StatsReportListener
 
persist(StorageLevel) - 类 中的方法org.apache.spark.api.java.JavaDoubleRDD
Set this RDD's storage level to persist its values across operations after the first time it is computed.
persist(StorageLevel) - 类 中的方法org.apache.spark.api.java.JavaPairRDD
Set this RDD's storage level to persist its values across operations after the first time it is computed.
persist(StorageLevel) - 类 中的方法org.apache.spark.api.java.JavaRDD
Set this RDD's storage level to persist its values across operations after the first time it is computed.
persist(StorageLevel) - 类 中的方法org.apache.spark.graphx.Graph
Caches the vertices and edges associated with this graph at the specified storage level, ignoring any target storage levels previously set.
persist(StorageLevel) - 类 中的方法org.apache.spark.graphx.impl.EdgeRDDImpl
Persists the edge partitions at the specified storage level, ignoring any existing target storage level.
persist(StorageLevel) - 类 中的方法org.apache.spark.graphx.impl.GraphImpl
 
persist(StorageLevel) - 类 中的方法org.apache.spark.graphx.impl.VertexRDDImpl
Persists the vertex partitions at the specified storage level, ignoring any existing target storage level.
persist(StorageLevel) - 类 中的方法org.apache.spark.mllib.linalg.distributed.BlockMatrix
Persists the underlying RDD with the specified storage level.
persist(StorageLevel) - 类 中的方法org.apache.spark.rdd.HadoopRDD
 
persist(StorageLevel) - 类 中的方法org.apache.spark.rdd.NewHadoopRDD
 
persist(StorageLevel) - 类 中的方法org.apache.spark.rdd.RDD
Set this RDD's storage level to persist its values across operations after the first time it is computed.
persist() - 类 中的方法org.apache.spark.rdd.RDD
Persist this RDD with the default storage level (MEMORY_ONLY).
persist() - 类 中的方法org.apache.spark.sql.Dataset
Persist this Dataset with the default storage level (MEMORY_AND_DISK).
persist(StorageLevel) - 类 中的方法org.apache.spark.sql.Dataset
Persist this Dataset with the given storage level.
persist() - 类 中的方法org.apache.spark.streaming.api.java.JavaDStream
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)
persist(StorageLevel) - 类 中的方法org.apache.spark.streaming.api.java.JavaDStream
Persist the RDDs of this DStream with the given storage level
persist() - 类 中的方法org.apache.spark.streaming.api.java.JavaPairDStream
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)
persist(StorageLevel) - 类 中的方法org.apache.spark.streaming.api.java.JavaPairDStream
Persist the RDDs of this DStream with the given storage level
persist(StorageLevel) - 类 中的方法org.apache.spark.streaming.dstream.DStream
Persist the RDDs of this DStream with the given storage level
persist() - 类 中的方法org.apache.spark.streaming.dstream.DStream
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)
personalizedPageRank(long, double, double) - 类 中的方法org.apache.spark.graphx.GraphOps
Run personalized PageRank for a given vertex, such that all random walks are started relative to the source node.
phrase(Parsers.Parser<T>) - 类 中的静态方法org.apache.spark.ml.feature.RFormulaParser
 
pi() - 类 中的方法org.apache.spark.ml.classification.NaiveBayesModel
 
pi() - 类 中的方法org.apache.spark.mllib.classification.NaiveBayesModel
 
pi() - 类 中的方法org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data
 
pi() - 类 中的方法org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
 
pickBin(Partition, RDD<?>, double, org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations) - 类 中的方法org.apache.spark.rdd.DefaultPartitionCoalescer
Takes a parent RDD partition and decides which of the partition groups to put it in. Takes locality into account, but also uses power-of-2 choices to load balance. It strikes a balance between the two using the balanceSlack variable.
pickRandomVertex() - 类 中的方法org.apache.spark.graphx.GraphOps
Picks a random vertex from the graph and returns its ID.
pipe(String) - 接口 中的方法org.apache.spark.api.java.JavaRDDLike
Return an RDD created by piping elements to a forked external process.
pipe(List<String>) - 接口 中的方法org.apache.spark.api.java.JavaRDDLike
Return an RDD created by piping elements to a forked external process.
pipe(List<String>, Map<String, String>) - 接口 中的方法org.apache.spark.api.java.JavaRDDLike
Return an RDD created by piping elements to a forked external process.
pipe(List<String>, Map<String, String>, boolean, int) - 接口 中的方法org.apache.spark.api.java.JavaRDDLike
Return an RDD created by piping elements to a forked external process.
pipe(List<String>, Map<String, String>, boolean, int, String) - 接口 中的方法org.apache.spark.api.java.JavaRDDLike
Return an RDD created by piping elements to a forked external process.
pipe(String) - 类 中的方法org.apache.spark.rdd.RDD
Return an RDD created by piping elements to a forked external process.
pipe(String, Map<String, String>) - 类 中的方法org.apache.spark.rdd.RDD
Return an RDD created by piping elements to a forked external process.
pipe(Seq<String>, Map<String, String>, Function1<Function1<String, BoxedUnit>, BoxedUnit>, Function2<T, Function1<String, BoxedUnit>, BoxedUnit>, boolean, int, String) - 类 中的方法org.apache.spark.rdd.RDD
Return an RDD created by piping elements to a forked external process.
Pipeline - org.apache.spark.ml中的类
A simple pipeline, which acts as an estimator.
Pipeline(String) - 类 的构造器org.apache.spark.ml.Pipeline
 
Pipeline() - 类 的构造器org.apache.spark.ml.Pipeline
 
Pipeline.SharedReadWrite$ - org.apache.spark.ml中的类
Methods for MLReader and MLWriter shared between Pipeline and PipelineModel
PipelineModel - org.apache.spark.ml中的类
Represents a fitted pipeline.
PipelineStage - org.apache.spark.ml中的类
:: DeveloperApi :: A stage in a pipeline, either an Estimator or a Transformer.
PipelineStage() - 类 的构造器org.apache.spark.ml.PipelineStage
 
pivot(String) - 类 中的方法org.apache.spark.sql.RelationalGroupedDataset
Pivots a column of the current DataFrame and performs the specified aggregation.
pivot(String, Seq<Object>) - 类 中的方法org.apache.spark.sql.RelationalGroupedDataset
Pivots a column of the current DataFrame and performs the specified aggregation.
pivot(String, List<Object>) - 类 中的方法org.apache.spark.sql.RelationalGroupedDataset
(Java-specific) Pivots a column of the current DataFrame and performs the specified aggregation.
pivot(Column) - 类 中的方法org.apache.spark.sql.RelationalGroupedDataset
Pivots a column of the current DataFrame and performs the specified aggregation.
pivot(Column, Seq<Object>) - 类 中的方法org.apache.spark.sql.RelationalGroupedDataset
Pivots a column of the current DataFrame and performs the specified aggregation.
pivot(Column, List<Object>) - 类 中的方法org.apache.spark.sql.RelationalGroupedDataset
(Java-specific) Pivots a column of the current DataFrame and performs the specified aggregation.
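Pivoting turns the distinct values of one column into output columns, aggregating another column within each group. The shape that groupBy(...).pivot(...).sum(...) produces can be sketched in plain Python (hypothetical helper, not the Spark API):

```python
from collections import defaultdict

def pivot_sum(rows, group_key, pivot_col, value_col):
    # One output row per group key; one output column per distinct
    # pivot-column value; cells hold the sum of value_col.
    out = defaultdict(dict)
    for r in rows:
        g, p = r[group_key], r[pivot_col]
        out[g][p] = out[g].get(p, 0) + r[value_col]
    return dict(out)

sales = [
    {"year": 2023, "quarter": "Q1", "amount": 5},
    {"year": 2023, "quarter": "Q2", "amount": 7},
    {"year": 2023, "quarter": "Q1", "amount": 3},
]
wide = pivot_sum(sales, "year", "quarter", "amount")
```

Supplying the value list explicitly (as the two-argument pivot overloads allow) avoids a pass over the data to discover the distinct pivot values.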
PivotType$() - 类 的构造器org.apache.spark.sql.RelationalGroupedDataset.PivotType$
 
plan() - 异常错误 中的方法org.apache.spark.sql.AnalysisException
 
PlanDynamicPruningFilters - org.apache.spark.sql.dynamicpruning中的类
This planner rule aims at rewriting dynamic pruning predicates in order to reuse the results of broadcast.
PlanDynamicPruningFilters(SparkSession) - 类 的构造器org.apache.spark.sql.dynamicpruning.PlanDynamicPruningFilters
 
planInputPartitions() - 接口 中的方法org.apache.spark.sql.connector.read.Batch
Returns a list of input partitions.
planInputPartitions(Offset) - 接口 中的方法org.apache.spark.sql.connector.read.streaming.ContinuousStream
Returns a list of input partitions given the start offset.
planInputPartitions(Offset, Offset) - 接口 中的方法org.apache.spark.sql.connector.read.streaming.MicroBatchStream
Returns a list of input partitions given the start and end offsets.
plus(Object) - 类 中的方法org.apache.spark.sql.Column
Sum of this expression and another expression.
plus(byte, byte) - 类 中的静态方法org.apache.spark.sql.types.ByteExactNumeric
 
plus(Decimal, Decimal) - 接口 中的方法org.apache.spark.sql.types.Decimal.DecimalIsConflicted
 
plus(Decimal, Decimal) - 类 中的静态方法org.apache.spark.sql.types.DecimalExactNumeric
 
plus(double, double) - 类 中的静态方法org.apache.spark.sql.types.DoubleExactNumeric
 
plus(float, float) - 类 中的静态方法org.apache.spark.sql.types.FloatExactNumeric
 
plus(int, int) - 类 中的静态方法org.apache.spark.sql.types.IntegerExactNumeric
 
plus(long, long) - 类 中的静态方法org.apache.spark.sql.types.LongExactNumeric
 
plus(short, short) - 类 中的静态方法org.apache.spark.sql.types.ShortExactNumeric
 
plus(Duration) - 类 中的方法org.apache.spark.streaming.Duration
 
plus(Duration) - 类 中的方法org.apache.spark.streaming.Time
 
pmml() - 接口 中的方法org.apache.spark.mllib.pmml.export.PMMLModelExport
Holder of the exported model in PMML format
PMMLExportable - org.apache.spark.mllib.pmml中的接口
:: DeveloperApi :: Export model to the PMML format. Predictive Model Markup Language (PMML) is an XML-based file format developed by the Data Mining Group (www.dmg.org).
PMMLKMeansModelWriter - org.apache.spark.ml.clustering中的类
A writer for KMeans that handles the "pmml" format
PMMLKMeansModelWriter() - 类 的构造器org.apache.spark.ml.clustering.PMMLKMeansModelWriter
 
PMMLLinearRegressionModelWriter - org.apache.spark.ml.regression中的类
A writer for LinearRegression that handles the "pmml" format
PMMLLinearRegressionModelWriter() - 类 的构造器org.apache.spark.ml.regression.PMMLLinearRegressionModelWriter
 
PMMLModelExport - org.apache.spark.mllib.pmml.export中的接口
 
PMMLModelExportFactory - org.apache.spark.mllib.pmml.export中的类
 
PMMLModelExportFactory() - 类 的构造器org.apache.spark.mllib.pmml.export.PMMLModelExportFactory
 
pmod(Column, Column) - 类 中的静态方法org.apache.spark.sql.functions
Returns the positive value of dividend mod divisor.
point() - 类 中的方法org.apache.spark.mllib.feature.VocabWord
 
POINTS() - 类 中的静态方法org.apache.spark.mllib.clustering.StreamingKMeans
 
pointSilhouetteCoefficient(Set<Object>, double, long, Function1<Object, Object>) - 类 中的静态方法org.apache.spark.ml.evaluation.CosineSilhouette
 
pointSilhouetteCoefficient(Set<Object>, double, long, Function1<Object, Object>) - 类 中的静态方法org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette
 
POISON_PILL() - 类 中的静态方法org.apache.spark.scheduler.AsyncEventQueue
 
PoisonPill() - 类 中的静态方法org.apache.spark.rpc.netty.MessageLoop
A poison inbox that indicates the message loop should stop processing messages.
Poisson$() - 类 的构造器org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
 
PoissonBounds - org.apache.spark.util.random中的类
Utility functions that help us determine bounds on adjusted sampling rate to guarantee exact sample sizes with high confidence when sampling with replacement.
PoissonBounds() - 类 的构造器org.apache.spark.util.random.PoissonBounds
 
PoissonGenerator - org.apache.spark.mllib.random中的类
:: DeveloperApi :: Generates i.i.d. samples from the Poisson distribution with the given mean.
PoissonGenerator(double) - 类 的构造器org.apache.spark.mllib.random.PoissonGenerator
 
poissonJavaRDD(JavaSparkContext, double, long, int, long) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
Java-friendly version of RandomRDDs.poissonRDD.
poissonJavaRDD(JavaSparkContext, double, long, int) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.poissonJavaRDD with the default seed.
poissonJavaRDD(JavaSparkContext, double, long) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.poissonJavaRDD with the default number of partitions and the default seed.
poissonJavaVectorRDD(JavaSparkContext, double, long, int, int, long) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
Java-friendly version of RandomRDDs.poissonVectorRDD.
poissonJavaVectorRDD(JavaSparkContext, double, long, int, int) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.poissonJavaVectorRDD with the default seed.
poissonJavaVectorRDD(JavaSparkContext, double, long, int) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.poissonJavaVectorRDD with the default number of partitions and the default seed.
poissonRDD(SparkContext, double, long, int, long) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
Generates an RDD comprised of i.i.d. samples from the Poisson distribution with the input mean.
PoissonSampler<T> - org.apache.spark.util.random中的类
:: DeveloperApi :: A sampler for sampling with replacement, based on values drawn from Poisson distribution.
PoissonSampler(double, boolean) - 类 的构造器org.apache.spark.util.random.PoissonSampler
 
PoissonSampler(double) - 类 的构造器org.apache.spark.util.random.PoissonSampler
 
poissonVectorRDD(SparkContext, double, long, int, int, long) - 类 中的静态方法org.apache.spark.mllib.random.RandomRDDs
Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the Poisson distribution with the input mean.
PolynomialExpansion - org.apache.spark.ml.feature中的类
Perform feature expansion in a polynomial space.
PolynomialExpansion(String) - 类 的构造器org.apache.spark.ml.feature.PolynomialExpansion
 
PolynomialExpansion() - 类 的构造器org.apache.spark.ml.feature.PolynomialExpansion
 
pool() - 类 中的方法org.apache.spark.serializer.KryoSerializer
 
popStdev() - 类 中的方法org.apache.spark.api.java.JavaDoubleRDD
Compute the population standard deviation of this RDD's elements.
popStdev() - 类 中的方法org.apache.spark.rdd.DoubleRDDFunctions
Compute the population standard deviation of this RDD's elements.
popStdev() - 类 中的方法org.apache.spark.util.StatCounter
Return the population standard deviation of the values.
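The distinction from sampleStdev is the divisor: population statistics divide by N, sample statistics by N - 1. A plain-Python sketch of the population form (illustrative, not the StatCounter implementation):

```python
import math

def pop_stdev(xs):
    # Population standard deviation: mean of squared deviations divided
    # by N (sampleStdev would divide by N - 1 to correct estimator bias).
    mean = sum(xs) / len(xs)
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))
```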
popVariance() - 类 中的方法org.apache.spark.api.java.JavaDoubleRDD
Compute the population variance of this RDD's elements.
popVariance() - 类 中的方法org.apache.spark.rdd.DoubleRDDFunctions
Compute the population variance of this RDD's elements.
popVariance() - 类 中的方法org.apache.spark.util.StatCounter
Return the population variance of the values.
port() - 接口 中的方法org.apache.spark.SparkExecutorInfo
 
port() - 类 中的方法org.apache.spark.SparkExecutorInfoImpl
 
port() - 类 中的方法org.apache.spark.storage.BlockManagerId
 
PortableDataStream - org.apache.spark.input中的类
A class that allows DataStreams to be serialized and moved around by not creating them until they need to be read.
PortableDataStream(CombineFileSplit, TaskAttemptContext, Integer) - 类 的构造器org.apache.spark.input.PortableDataStream
 
portMaxRetries(SparkConf) - 类 中的静态方法org.apache.spark.util.Utils
Maximum number of retries when binding to a port before giving up.
posexplode(Column) - 类 中的静态方法org.apache.spark.sql.functions
Creates a new row for each element with position in the given array or map column.
posexplode_outer(Column) - 类 中的静态方法org.apache.spark.sql.functions
Creates a new row for each element with position in the given array or map column.
position() - Method in class org.apache.spark.storage.ReadableChannelFileRegion
 
positioned(Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
 
post(String, InboxMessage) - Method in class org.apache.spark.rpc.netty.DedicatedMessageLoop
 
post(String, InboxMessage) - Method in class org.apache.spark.rpc.netty.MessageLoop
 
post(String, InboxMessage) - Method in class org.apache.spark.rpc.netty.SharedMessageLoop
 
post(SparkListenerEvent) - Method in class org.apache.spark.scheduler.AsyncEventQueue
 
Postfix$() - Constructor for class org.apache.spark.mllib.fpm.PrefixSpan.Postfix$
 
PostgresDialect - Class in org.apache.spark.sql.jdbc
 
PostgresDialect() - Constructor for class org.apache.spark.sql.jdbc.PostgresDialect
 
postStartHook() - Method in interface org.apache.spark.scheduler.TaskScheduler
 
postToAll(E) - Method in interface org.apache.spark.util.ListenerBus
Post the event to all registered listeners.
pow(Column, Column) - Static method in class org.apache.spark.sql.functions
Returns the value of the first argument raised to the power of the second argument.
pow(Column, String) - Static method in class org.apache.spark.sql.functions
Returns the value of the first argument raised to the power of the second argument.
pow(String, Column) - Static method in class org.apache.spark.sql.functions
Returns the value of the first argument raised to the power of the second argument.
pow(String, String) - Static method in class org.apache.spark.sql.functions
Returns the value of the first argument raised to the power of the second argument.
pow(Column, double) - Static method in class org.apache.spark.sql.functions
Returns the value of the first argument raised to the power of the second argument.
pow(String, double) - Static method in class org.apache.spark.sql.functions
Returns the value of the first argument raised to the power of the second argument.
pow(double, Column) - Static method in class org.apache.spark.sql.functions
Returns the value of the first argument raised to the power of the second argument.
pow(double, String) - Static method in class org.apache.spark.sql.functions
Returns the value of the first argument raised to the power of the second argument.
PowerIterationClustering - Class in org.apache.spark.ml.clustering
Power Iteration Clustering (PIC), a scalable graph clustering algorithm developed by Lin and Cohen.
PowerIterationClustering() - Constructor for class org.apache.spark.ml.clustering.PowerIterationClustering
 
PowerIterationClustering - Class in org.apache.spark.mllib.clustering
Power Iteration Clustering (PIC), a scalable graph clustering algorithm developed by Lin and Cohen.
PowerIterationClustering() - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClustering
Constructs a PIC instance with default parameters: {k: 2, maxIterations: 100, initMode: "random"}.
PowerIterationClustering.Assignment - Class in org.apache.spark.mllib.clustering
Cluster assignment.
PowerIterationClustering.Assignment$ - Class in org.apache.spark.mllib.clustering
 
PowerIterationClusteringModel - Class in org.apache.spark.mllib.clustering
Model produced by PowerIterationClustering.
PowerIterationClusteringModel(int, RDD<PowerIterationClustering.Assignment>) - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClusteringModel
 
PowerIterationClusteringModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.clustering
 
PowerIterationClusteringParams - Interface in org.apache.spark.ml.clustering
Common params for PowerIterationClustering.
PowerIterationClusteringWrapper - Class in org.apache.spark.ml.r
 
PowerIterationClusteringWrapper() - Constructor for class org.apache.spark.ml.r.PowerIterationClusteringWrapper
 
pr() - Method in interface org.apache.spark.ml.classification.BinaryLogisticRegressionSummary
Returns the precision-recall curve, which is a DataFrame containing two fields, recall and precision, with (0.0, 1.0) prepended to it.
pr() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl
 
pr() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Returns the precision-recall curve, which is an RDD of (recall, precision), NOT (precision, recall), with (0.0, p) prepended to it, where p is the precision associated with the lowest recall on the curve.
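The (recall, precision) ordering and the prepended (0.0, p) point are easy to get wrong when consuming this curve. A simplified plain-Python sketch of how such a curve arises from scored, labeled examples (this toy version treats every ranked position as a distinct threshold, which the real implementation need not do):

```python
def pr_curve(scored):
    # scored: list of (score, label) with label 0 or 1.
    # Returns (recall, precision) points in that order, with (0.0, p)
    # prepended, where p is the precision at the lowest recall.
    scored = sorted(scored, key=lambda sl: -sl[0])  # descending by score
    total_pos = sum(label for _, label in scored)
    points, tp = [], 0
    for rank, (_, label) in enumerate(scored, start=1):
        tp += label
        points.append((tp / total_pos, tp / rank))  # (recall, precision)
    return [(0.0, points[0][1])] + points

curve = pr_curve([(0.9, 1), (0.8, 0), (0.7, 1), (0.3, 0)])
print(curve[0])   # (0.0, 1.0)
print(curve[-1])  # (1.0, 0.5)
```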
preciseSize() - Method in interface org.apache.spark.storage.memory.MemoryEntryBuilder
 
Precision - Class in org.apache.spark.mllib.evaluation.binary
Precision.
Precision() - Constructor for class org.apache.spark.mllib.evaluation.binary.Precision
 
precision(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
Returns precision for a given label (category).
precision() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
Returns document-based precision averaged by the number of documents.
precision(double) - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
Returns precision for a given label (category).
precision() - Method in class org.apache.spark.sql.types.Decimal
 
precision() - Method in class org.apache.spark.sql.types.DecimalType
 
precisionAt(int) - Method in class org.apache.spark.mllib.evaluation.RankingMetrics
Compute the average precision of all the queries, truncated at ranking position k.
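Precision@k averages, over all queries, the fraction of the top-k predictions that are relevant. A minimal plain-Python sketch of that idea (illustrative; it divides by k even when a query has fewer than k relevant items, and ignores edge cases the real metric handles):

```python
def precision_at_k(queries, k):
    # queries: list of (predicted_ranking, relevant_set) pairs.
    total = 0.0
    for predicted, relevant in queries:
        top_k = predicted[:k]
        hits = sum(1 for item in top_k if item in relevant)
        total += hits / k
    return total / len(queries)

queries = [([1, 2, 3, 4], {1, 3}), ([5, 6, 7, 8], {6})]
print(precision_at_k(queries, 2))  # (1/2 + 1/2) / 2 = 0.5
```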
precisionByLabel() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
Returns precision for each label (category).
precisionByThreshold() - Method in interface org.apache.spark.ml.classification.BinaryLogisticRegressionSummary
Returns the (threshold, precision) curve as a DataFrame with two fields.
precisionByThreshold() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl
 
precisionByThreshold() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Returns the (threshold, precision) curve.
predict(Vector) - Method in interface org.apache.spark.ml.ann.TopologyModel
Prediction of the model.
predict(FeaturesType) - Method in class org.apache.spark.ml.classification.ClassificationModel
Predict label for the given features.
predict(Vector) - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
 
predict(Vector) - Method in class org.apache.spark.ml.classification.GBTClassificationModel
 
predict(Vector) - Method in class org.apache.spark.ml.classification.LinearSVCModel
 
predict(Vector) - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
Predict label for the given feature vector.
predict(Vector) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
Predict label for the given features.
predict(Vector) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
 
predict(Vector) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
 
predict(Vector) - Method in class org.apache.spark.ml.clustering.KMeansModel
 
predict(FeaturesType) - Method in class org.apache.spark.ml.PredictionModel
Predict label for the given features.
predict(Vector) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
 
predict(Vector) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
 
predict(Vector) - Method in class org.apache.spark.ml.regression.GBTRegressionModel
 
predict(Vector) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
 
predict(Vector) - Method in class org.apache.spark.ml.regression.LinearRegressionModel
 
predict(Vector) - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
 
predict(RDD<Vector>) - Method in interface org.apache.spark.mllib.classification.ClassificationModel
Predict values for the given data set using the model trained.
predict(Vector) - Method in interface org.apache.spark.mllib.classification.ClassificationModel
Predict values for a single data point using the model trained.
predict(JavaRDD<Vector>) - Method in interface org.apache.spark.mllib.classification.ClassificationModel
Predict values for examples stored in a JavaRDD.
predict(RDD<Vector>) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
 
predict(Vector) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
 
predict(Vector) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
Predicts the index of the cluster that the input point belongs to.
predict(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
Predicts the indices of the clusters that the input points belong to.
predict(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
Java-friendly version of predict().
predict(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
Maps given points to their cluster indices.
predict(Vector) - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
Maps given point to its cluster index.
predict(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
Java-friendly version of predict().
predict(Vector) - Method in class org.apache.spark.mllib.clustering.KMeansModel
Returns the cluster index that a given point belongs to.
predict(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.KMeansModel
Maps given points to their cluster indices.
predict(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.clustering.KMeansModel
Maps given points to their cluster indices.
predict(int, int) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
Predict the rating of one user for one product.
predict(RDD<Tuple2<Object, Object>>) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
Predict the rating of many users for many products.
predict(JavaPairRDD<Integer, Integer>) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
Java-friendly version of MatrixFactorizationModel.predict.
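At its core, a matrix-factorization rating prediction is just the dot product of a user's and a product's latent factor vectors. A minimal sketch of that computation (illustrative only; the real model looks the vectors up from its factor RDDs):

```python
def predict_rating(user_features, product_features):
    # Predicted rating = dot product of the two latent factor vectors.
    return sum(u * p for u, p in zip(user_features, product_features))

print(predict_rating([0.5, 1.0, -0.25], [2.0, 1.0, 4.0]))  # 1.0
```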
predict(RDD<Vector>) - Method in class org.apache.spark.mllib.regression.GeneralizedLinearModel
Predict values for the given data set using the model trained.
predict(Vector) - Method in class org.apache.spark.mllib.regression.GeneralizedLinearModel
Predict values for a single data point using the model trained.
predict(RDD<Object>) - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
Predict labels for provided features.
predict(JavaDoubleRDD) - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
Predict labels for provided features.
predict(double) - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
Predict a single label.
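An isotonic regression model stores sorted boundaries with one prediction per boundary; predicting a single label clips inputs outside the boundary range and interpolates linearly between adjacent boundaries. A simplified sketch of that lookup (illustrative only, assuming ascending boundaries):

```python
import bisect

def isotonic_predict(boundaries, predictions, x):
    # Clip to the first/last prediction outside the boundary range.
    if x <= boundaries[0]:
        return predictions[0]
    if x >= boundaries[-1]:
        return predictions[-1]
    # Linear interpolation between the two surrounding boundaries.
    i = bisect.bisect_right(boundaries, x)
    x0, x1 = boundaries[i - 1], boundaries[i]
    y0, y1 = predictions[i - 1], predictions[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

print(isotonic_predict([1.0, 3.0, 5.0], [2.0, 4.0, 4.0], 2.0))  # 3.0
```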
predict(RDD<Vector>) - Method in interface org.apache.spark.mllib.regression.RegressionModel
Predict values for the given data set using the model trained.
predict(Vector) - Method in interface org.apache.spark.mllib.regression.RegressionModel
Predict values for a single data point using the model trained.
predict(JavaRDD<Vector>) - Method in interface org.apache.spark.mllib.regression.RegressionModel
Predict values for examples stored in a JavaRDD.
predict(Vector) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
Predict values for a single data point using the model trained.
predict(RDD<Vector>) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
Predict values for the given data set using the model trained.
predict(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
Predict values for the given data set using the model trained.
predict() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
 
predict() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
 
predict() - Method in class org.apache.spark.mllib.tree.model.Node
 
predict(Vector) - Method in class org.apache.spark.mllib.tree.model.Node
Predicted value if the node is not a leaf.
Predict - Class in org.apache.spark.mllib.tree.model
:: DeveloperApi :: Predicted value for a node. param: predict predicted value; param: prob probability of the label (classification only).
Predict(double, double) - Constructor for class org.apache.spark.mllib.tree.model.Predict
 
predict() - Method in class org.apache.spark.mllib.tree.model.Predict
 
PredictData(double, double) - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
 
PredictData$() - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData$
 
prediction() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
 
prediction() - Method in class org.apache.spark.ml.tree.InternalNode
 
prediction() - Method in class org.apache.spark.ml.tree.LeafNode
 
prediction() - Method in class org.apache.spark.ml.tree.Node
The prediction a leaf node makes, or the prediction an internal node would make if it were a leaf node.
predictionCol() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
Field in "predictions" which gives the prediction of each class.
predictionCol() - Method in class org.apache.spark.ml.classification.LogisticRegressionSummaryImpl
 
predictionCol() - Method in class org.apache.spark.ml.classification.OneVsRest
 
predictionCol() - Method in class org.apache.spark.ml.classification.OneVsRestModel
 
predictionCol() - Method in class org.apache.spark.ml.clustering.BisectingKMeans
 
predictionCol() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
 
predictionCol() - Method in class org.apache.spark.ml.clustering.ClusteringSummary
 
predictionCol() - Method in class org.apache.spark.ml.clustering.GaussianMixture
 
predictionCol() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
 
predictionCol() - Method in class org.apache.spark.ml.clustering.KMeans
 
predictionCol() - Method in class org.apache.spark.ml.clustering.KMeansModel
 
predictionCol() - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
 
predictionCol() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
 
predictionCol() - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
 
predictionCol() - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
 
predictionCol() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
 
predictionCol() - Method in class org.apache.spark.ml.fpm.FPGrowth
 
predictionCol() - Method in class org.apache.spark.ml.fpm.FPGrowthModel
 
predictionCol() - Method in interface org.apache.spark.ml.param.shared.HasPredictionCol
Param for the prediction column name.
predictionCol() - Method in class org.apache.spark.ml.PredictionModel
 
predictionCol() - Method in class org.apache.spark.ml.Predictor
 
predictionCol() - Method in class org.apache.spark.ml.recommendation.ALS
 
predictionCol() - Method in class org.apache.spark.ml.recommendation.ALSModel
 
predictionCol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
 
predictionCol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
 
predictionCol() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
Field in "predictions" which gives the predicted value of each instance.
predictionCol() - Method in class org.apache.spark.ml.regression.IsotonicRegression
 
predictionCol() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
 
predictionCol() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
 
PredictionModel<FeaturesType,M extends PredictionModel<FeaturesType,M>> - Class in org.apache.spark.ml
:: DeveloperApi :: Abstraction for a model for prediction tasks (regression and classification).
PredictionModel() - Constructor for class org.apache.spark.ml.PredictionModel
 
predictions() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
DataFrame output by the model's transform method.
predictions() - Method in class org.apache.spark.ml.classification.LogisticRegressionSummaryImpl
 
predictions() - Method in class org.apache.spark.ml.clustering.ClusteringSummary
 
predictions() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
Predictions output by the model's transform method.
predictions() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
Predictions associated with the boundaries at the same index, monotone because of isotonic regression.
predictions() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
 
predictions() - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
 
predictLeaf(Vector) - Method in interface org.apache.spark.ml.tree.DecisionTreeModel
 
predictLeaf(Vector) - Method in interface org.apache.spark.ml.tree.TreeEnsembleModel
 
predictOn(DStream<Vector>) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
Use the clustering model to make predictions on batches of data from a DStream.
predictOn(JavaDStream<Vector>) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
Java-friendly version of predictOn.
predictOn(DStream<Vector>) - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
Use the model to make predictions on batches of data from a DStream.
predictOn(JavaDStream<Vector>) - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
Java-friendly version of predictOn.
predictOnValues(DStream<Tuple2<K, Vector>>, ClassTag<K>) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
Use the model to make predictions on the values of a DStream and carry over its keys.
predictOnValues(JavaPairDStream<K, Vector>) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
Java-friendly version of predictOnValues.
predictOnValues(DStream<Tuple2<K, Vector>>, ClassTag<K>) - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
Use the model to make predictions on the values of a DStream and carry over its keys.
predictOnValues(JavaPairDStream<K, Vector>) - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
Java-friendly version of predictOnValues.
Predictor<FeaturesType,Learner extends Predictor<FeaturesType,Learner,M>,M extends PredictionModel<FeaturesType,M>> - Class in org.apache.spark.ml
:: DeveloperApi :: Abstraction for prediction problems (regression and classification).
Predictor() - Constructor for class org.apache.spark.ml.Predictor
 
PredictorParams - Interface in org.apache.spark.ml
(private[ml]) Trait for parameters for prediction (regression and classification).
predictProbabilities(RDD<Vector>) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
Predict values for the given data set using the model trained.
predictProbabilities(Vector) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
Predict posterior class probabilities for a single data point using the model trained.
predictProbability(Vector) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
 
predictQuantiles(Vector) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
 
predictRaw(Vector) - Method in interface org.apache.spark.ml.ann.TopologyModel
Raw prediction of the model.
predictSoft(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
Given the input vectors, return the membership value of each vector to all mixture components.
predictSoft(Vector) - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
Given the input vector, return the membership values to all mixture components.
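A Gaussian-mixture soft membership is the weighted density of each component at the input, normalized so the memberships sum to 1. A sketch of that idea for scalar (1-D) inputs (illustrative only; the actual model works on multivariate Vectors with full covariances):

```python
import math

def predict_soft(x, weights, means, variances):
    # Per-component membership: weight * Gaussian density, normalized to sum to 1.
    def density(x, mu, var):
        return math.exp(-((x - mu) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)
    raw = [w * density(x, mu, var) for w, mu, var in zip(weights, means, variances)]
    total = sum(raw)
    return [r / total for r in raw]

memberships = predict_soft(0.1, [0.5, 0.5], [0.0, 5.0], [1.0, 1.0])
# A point near mean 0.0 belongs almost entirely to component 0.
```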
PREFER_CONFIGURED_MASTER_ADDRESS() - Static method in class org.apache.spark.internal.config.Worker
 
preferredLocation() - Method in class org.apache.spark.streaming.receiver.Receiver
Override this to specify a preferred location (hostname).
preferredLocations(Partition) - Method in class org.apache.spark.rdd.RDD
Get the preferred locations of a partition, taking into account whether the RDD is checkpointed.
preferredLocations() - Method in interface org.apache.spark.sql.connector.read.InputPartition
The preferred locations where the input partition reader returned by this partition can run faster, but Spark does not guarantee to run the input partition reader on these locations.
Prefix$() - Constructor for class org.apache.spark.mllib.fpm.PrefixSpan.Prefix$
 
prefixesToRewrite() - Method in class org.apache.spark.ml.feature.VectorAttributeRewriter
 
PrefixSpan - Class in org.apache.spark.ml.fpm
A parallel PrefixSpan algorithm to mine frequent sequential patterns.
PrefixSpan(String) - Constructor for class org.apache.spark.ml.fpm.PrefixSpan
 
PrefixSpan() - Constructor for class org.apache.spark.ml.fpm.PrefixSpan
 
PrefixSpan - Class in org.apache.spark.mllib.fpm
A parallel PrefixSpan algorithm to mine frequent sequential patterns.
PrefixSpan() - Constructor for class org.apache.spark.mllib.fpm.PrefixSpan
Constructs a default instance with default parameters {minSupport: 0.1, maxPatternLength: 10, maxLocalProjDBSize: 32000000L}.
PrefixSpan.FreqSequence<Item> - Class in org.apache.spark.mllib.fpm
Represents a frequent sequence.
PrefixSpan.Postfix$ - Class in org.apache.spark.mllib.fpm
 
PrefixSpan.Prefix$ - Class in org.apache.spark.mllib.fpm
 
PrefixSpanModel<Item> - Class in org.apache.spark.mllib.fpm
Model fitted by PrefixSpan. param: freqSequences frequent sequences.
PrefixSpanModel(RDD<PrefixSpan.FreqSequence<Item>>) - Constructor for class org.apache.spark.mllib.fpm.PrefixSpanModel
 
PrefixSpanModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.fpm
 
PrefixSpanWrapper - Class in org.apache.spark.ml.r
 
PrefixSpanWrapper() - Constructor for class org.apache.spark.ml.r.PrefixSpanWrapper
 
prefLoc() - Method in class org.apache.spark.rdd.PartitionGroup
 
pregel(A, int, EdgeDirection, Function3<Object, VD, A, VD>, Function1<EdgeTriplet<VD, ED>, Iterator<Tuple2<Object, A>>>, Function2<A, A, A>, ClassTag<A>) - Method in class org.apache.spark.graphx.GraphOps
Execute a Pregel-like iterative vertex-parallel abstraction.
Pregel - Class in org.apache.spark.graphx
Implements a Pregel-like bulk-synchronous message-passing API.
Pregel() - Constructor for class org.apache.spark.graphx.Pregel
 
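The Pregel abstraction alternates three user-supplied functions until no messages remain: a vertex program that absorbs incoming messages, a send function over edges, and a merge function for competing messages to the same vertex. A tiny single-machine sketch computing single-source shortest paths in that style (illustrative only; GraphX runs this over distributed RDDs):

```python
import math

def pregel_sssp(edges, num_vertices, source):
    # edges: list of (src, dst, weight); dist starts at infinity everywhere.
    dist = [math.inf] * num_vertices
    messages = {source: 0.0}
    while messages:  # iterate until no vertex receives a message
        # Vertex program: absorb the best incoming message.
        for v, msg in messages.items():
            dist[v] = min(dist[v], msg)
        # Send phase: emit a message along each edge that improves the target.
        new_messages = {}
        for src, dst, w in edges:
            candidate = dist[src] + w
            if candidate < dist[dst]:
                # Merge phase: keep the minimum of competing messages.
                new_messages[dst] = min(new_messages.get(dst, math.inf), candidate)
        messages = new_messages
    return dist

edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 5.0)]
print(pregel_sssp(edges, 3, 0))  # [0.0, 1.0, 3.0]
```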
prepareWritable(Writable, Seq<Tuple2<String, String>>) - Static method in class org.apache.spark.sql.hive.HiveShim
 
prepareWrite(SparkSession, Job, Map<String, String>, StructType) - Method in class org.apache.spark.sql.hive.execution.HiveFileFormat
 
prepareWrite(SparkSession, Job, Map<String, String>, StructType) - Method in class org.apache.spark.sql.hive.orc.OrcFileFormat
 
prependBaseUri(HttpServletRequest, String, String) - Static method in class org.apache.spark.ui.UIUtils
 
prettyJson() - Method in interface org.apache.spark.sql.Row
The pretty (i.e. indented) JSON representation of this row.
prettyJson() - Method in class org.apache.spark.sql.streaming.SinkProgress
The pretty (i.e. indented) JSON representation of this progress.
prettyJson() - Method in class org.apache.spark.sql.streaming.SourceProgress
The pretty (i.e. indented) JSON representation of this progress.
prettyJson() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
The pretty (i.e. indented) JSON representation of this progress.
prettyJson() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
The pretty (i.e. indented) JSON representation of this progress.
prettyJson() - Method in class org.apache.spark.sql.streaming.StreamingQueryStatus
The pretty (i.e. indented) JSON representation of this status.
prettyJson() - Static method in class org.apache.spark.sql.types.BinaryType
 
prettyJson() - Static method in class org.apache.spark.sql.types.BooleanType
 
prettyJson() - Static method in class org.apache.spark.sql.types.ByteType
 
prettyJson() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
 
prettyJson() - Method in class org.apache.spark.sql.types.DataType
The pretty (i.e. indented) JSON representation of this data type.
prettyJson() - Static method in class org.apache.spark.sql.types.DateType
 
prettyJson() - Static method in class org.apache.spark.sql.types.DoubleType
 
prettyJson() - Static method in class org.apache.spark.sql.types.FloatType
 
prettyJson() - Static method in class org.apache.spark.sql.types.IntegerType
 
prettyJson() - Static method in class org.apache.spark.sql.types.LongType
 
prettyJson() - Static method in class org.apache.spark.sql.types.NullType
 
prettyJson() - Static method in class org.apache.spark.sql.types.ShortType
 
prettyJson() - Static method in class org.apache.spark.sql.types.StringType
 
prettyJson() - Static method in class org.apache.spark.sql.types.TimestampType
 
prettyPrint() - Method in class org.apache.spark.streaming.Duration
 
prev() - Method in class org.apache.spark.rdd.ShuffledRDD
 
prev() - Method in class org.apache.spark.status.LiveRDDPartition
 
print() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Print the first ten elements of each RDD generated in this DStream.
print(int) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Print the first num elements of each RDD generated in this DStream.
print() - Method in class org.apache.spark.streaming.dstream.DStream
Print the first ten elements of each RDD generated in this DStream.
print(int) - Method in class org.apache.spark.streaming.dstream.DStream
Print the first num elements of each RDD generated in this DStream.
printErrorAndExit(String) - Method in interface org.apache.spark.util.CommandLineLoggingUtils
 
printMessage(String) - Method in interface org.apache.spark.util.CommandLineLoggingUtils
 
printSchema() - Method in class org.apache.spark.sql.Dataset
Prints the schema to the console in a nice tree format.
printSchema(int) - Method in class org.apache.spark.sql.Dataset
Prints the schema up to the given level to the console in a nice tree format.
printStats() - Method in class org.apache.spark.streaming.scheduler.StatsReportListener
 
printStream() - Method in interface org.apache.spark.util.CommandLineLoggingUtils
 
printTreeString() - Method in class org.apache.spark.sql.types.StructType
 
prioritize(BlockManagerId, Seq<BlockManagerId>, HashSet<BlockManagerId>, BlockId, int) - Method in class org.apache.spark.storage.BasicBlockReplicationPolicy
Method to prioritize a bunch of candidate peers of a block manager.
prioritize(BlockManagerId, Seq<BlockManagerId>, HashSet<BlockManagerId>, BlockId, int) - Method in interface org.apache.spark.storage.BlockReplicationPolicy
Method to prioritize a bunch of candidate peers of a block.
prioritize(BlockManagerId, Seq<BlockManagerId>, HashSet<BlockManagerId>, BlockId, int) - Method in class org.apache.spark.storage.RandomBlockReplicationPolicy
Method to prioritize a bunch of candidate peers of a block.
priority() - Method in interface org.apache.spark.scheduler.Schedulable
 
prob() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
 
prob() - Method in class org.apache.spark.mllib.tree.model.Predict
 
ProbabilisticClassificationModel<FeaturesType,M extends ProbabilisticClassificationModel<FeaturesType,M>> - Class in org.apache.spark.ml.classification
:: DeveloperApi :: Model produced by a ProbabilisticClassifier.
ProbabilisticClassificationModel() - Constructor for class org.apache.spark.ml.classification.ProbabilisticClassificationModel
 
ProbabilisticClassifier<FeaturesType,E extends ProbabilisticClassifier<FeaturesType,E,M>,M extends ProbabilisticClassificationModel<FeaturesType,M>> - Class in org.apache.spark.ml.classification
:: DeveloperApi :: Single-label binary or multiclass classifier which can output class conditional probabilities.
ProbabilisticClassifier() - Constructor for class org.apache.spark.ml.classification.ProbabilisticClassifier
 
ProbabilisticClassifierParams - Interface in org.apache.spark.ml.classification
(private[classification]) Params for probabilistic classification.
probabilities() - Static method in class org.apache.spark.scheduler.StatsReportListener
 
probability() - Method in class org.apache.spark.ml.clustering.GaussianMixtureSummary
 
probabilityCol() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
Field in "predictions" which gives the probability of each class as a vector.
probabilityCol() - Method in class org.apache.spark.ml.classification.LogisticRegressionSummaryImpl
 
probabilityCol() - Method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
 
probabilityCol() - Method in class org.apache.spark.ml.classification.ProbabilisticClassifier
 
probabilityCol() - Method in class org.apache.spark.ml.clustering.GaussianMixture
 
probabilityCol() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
 
probabilityCol() - Method in class org.apache.spark.ml.clustering.GaussianMixtureSummary
 
probabilityCol() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
 
probabilityCol() - Method in interface org.apache.spark.ml.param.shared.HasProbabilityCol
Param for the column name of predicted class conditional probabilities.
Probit$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Probit$
 
process(T) - Method in class org.apache.spark.sql.ForeachWriter
Called to process the data on the executor side.
PROCESS_LOCAL() - Static method in class org.apache.spark.scheduler.TaskLocality
 
processAllAvailable() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
Blocks until all available data in the source has been processed and committed to the sink.
processedRowsPerSecond() - Method in class org.apache.spark.sql.streaming.SourceProgress
 
processedRowsPerSecond() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
The aggregate (across all sources) rate at which Spark is processing data.
processingDelay() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
Time taken for all the jobs of this batch to finish processing from the time they started processing.
processingEndTime() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
 
processingStartTime() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
 
ProcessingTime(long) - Static method in class org.apache.spark.sql.streaming.Trigger
A trigger policy that runs a query periodically based on an interval in processing time.
ProcessingTime(long, TimeUnit) - Static method in class org.apache.spark.sql.streaming.Trigger
(Java-friendly) A trigger policy that runs a query periodically based on an interval in processing time.
ProcessingTime(Duration) - Static method in class org.apache.spark.sql.streaming.Trigger
(Scala-friendly) A trigger policy that runs a query periodically based on an interval in processing time.
ProcessingTime(String) - Static method in class org.apache.spark.sql.streaming.Trigger
A trigger policy that runs a query periodically based on an interval in processing time.
processingTime() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
 
ProcessingTimeTimeout() - Static method in class org.apache.spark.sql.streaming.GroupStateTimeout
Timeout based on processing time.
processStreamByLine(String, InputStream, Function1<String, BoxedUnit>) - Static method in class org.apache.spark.util.Utils
Return and start a daemon thread that processes the content of the input stream line by line.
ProcessTreeMetrics - Class in org.apache.spark.metrics
 
ProcessTreeMetrics() - Constructor for class org.apache.spark.metrics.ProcessTreeMetrics
 
producedAttributes() - Method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
 
product() - Method in class org.apache.spark.mllib.recommendation.Rating
 
product(TypeTags.TypeTag<T>) - Static method in class org.apache.spark.sql.Encoders
An encoder for Scala's product type (tuples, case classes, etc).
productArity() - Static method in class org.apache.spark.ExpireDeadHosts
 
productArity() - Static method in class org.apache.spark.metrics.DirectPoolMemory
 
productArity() - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics
 
productArity() - Static method in class org.apache.spark.metrics.JVMHeapMemory
 
productArity() - Static method in class org.apache.spark.metrics.JVMOffHeapMemory
 
productArity() - Static method in class org.apache.spark.metrics.MappedPoolMemory
 
productArity() - Static method in class org.apache.spark.metrics.OffHeapExecutionMemory
 
productArity() - Static method in class org.apache.spark.metrics.OffHeapStorageMemory
 
productArity() - Static method in class org.apache.spark.metrics.OffHeapUnifiedMemory
 
productArity() - Static method in class org.apache.spark.metrics.OnHeapExecutionMemory
 
productArity() - Static method in class org.apache.spark.metrics.OnHeapStorageMemory
 
productArity() - Static method in class org.apache.spark.metrics.OnHeapUnifiedMemory
 
productArity() - Static method in class org.apache.spark.metrics.ProcessTreeMetrics
 
productArity() - Static method in class org.apache.spark.ml.feature.Dot
 
productArity() - Static method in class org.apache.spark.ml.feature.EmptyTerm
 
productArity() - Static method in class org.apache.spark.Resubmitted
 
productArity() - Static method in class org.apache.spark.rpc.netty.OnStart
 
productArity() - Static method in class org.apache.spark.rpc.netty.OnStop
 
productArity() - Static method in class org.apache.spark.scheduler.AllJobsCancelled
 
productArity() - Static method in class org.apache.spark.scheduler.JobSucceeded
 
productArity() - Static method in class org.apache.spark.scheduler.ResubmitFailedStages
 
productArity() - Static method in class org.apache.spark.scheduler.StopCoordinator
 
productArity() - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
 
productArity() - Static method in class org.apache.spark.sql.jdbc.OracleDialect
 
productArity() - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
 
productArity() - Static method in class org.apache.spark.sql.sources.AlwaysFalse
 
productArity() - Static method in class org.apache.spark.sql.sources.AlwaysTrue
 
productArity() - Static method in class org.apache.spark.sql.types.BinaryType
 
productArity() - Static method in class org.apache.spark.sql.types.BooleanType
 
productArity() - Static method in class org.apache.spark.sql.types.ByteType
 
productArity() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
 
productArity() - Static method in class org.apache.spark.sql.types.DateType
 
productArity() - Static method in class org.apache.spark.sql.types.DoubleType
 
productArity() - Static method in class org.apache.spark.sql.types.FloatType
 
productArity() - Static method in class org.apache.spark.sql.types.IntegerType
 
productArity() - Static method in class org.apache.spark.sql.types.LongType
 
productArity() - Static method in class org.apache.spark.sql.types.NullType
 
productArity() - Static method in class org.apache.spark.sql.types.ShortType
 
productArity() - Static method in class org.apache.spark.sql.types.StringType
 
productArity() - Static method in class org.apache.spark.sql.types.TimestampType
 
productArity() - Static method in class org.apache.spark.StopMapOutputTracker
 
productArity() - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials
 
productArity() - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds
 
productArity() - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo
 
productArity() - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers
 
productArity() - Static method in class org.apache.spark.Success
 
productArity() - Static method in class org.apache.spark.TaskResultLost
 
productArity() - Static method in class org.apache.spark.TaskSchedulerIsSet
 
productArity() - Static method in class org.apache.spark.UnknownReason
 
productElement(int) - Static method in class org.apache.spark.ExpireDeadHosts
 
productElement(int) - Static method in class org.apache.spark.metrics.DirectPoolMemory
 
productElement(int) - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics
 
productElement(int) - Static method in class org.apache.spark.metrics.JVMHeapMemory
 
productElement(int) - Static method in class org.apache.spark.metrics.JVMOffHeapMemory
 
productElement(int) - Static method in class org.apache.spark.metrics.MappedPoolMemory
 
productElement(int) - Static method in class org.apache.spark.metrics.OffHeapExecutionMemory
 
productElement(int) - Static method in class org.apache.spark.metrics.OffHeapStorageMemory
 
productElement(int) - Static method in class org.apache.spark.metrics.OffHeapUnifiedMemory
 
productElement(int) - Static method in class org.apache.spark.metrics.OnHeapExecutionMemory
 
productElement(int) - Static method in class org.apache.spark.metrics.OnHeapStorageMemory
 
productElement(int) - Static method in class org.apache.spark.metrics.OnHeapUnifiedMemory
 
productElement(int) - Static method in class org.apache.spark.metrics.ProcessTreeMetrics
 
productElement(int) - Static method in class org.apache.spark.ml.feature.Dot
 
productElement(int) - Static method in class org.apache.spark.ml.feature.EmptyTerm
 
productElement(int) - Static method in class org.apache.spark.Resubmitted
 
productElement(int) - Static method in class org.apache.spark.rpc.netty.OnStart
 
productElement(int) - Static method in class org.apache.spark.rpc.netty.OnStop
 
productElement(int) - Static method in class org.apache.spark.scheduler.AllJobsCancelled
 
productElement(int) - Static method in class org.apache.spark.scheduler.JobSucceeded
 
productElement(int) - Static method in class org.apache.spark.scheduler.ResubmitFailedStages
 
productElement(int) - Static method in class org.apache.spark.scheduler.StopCoordinator
 
productElement(int) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
 
productElement(int) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
 
productElement(int) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
 
productElement(int) - Static method in class org.apache.spark.sql.sources.AlwaysFalse
 
productElement(int) - Static method in class org.apache.spark.sql.sources.AlwaysTrue
 
productElement(int) - Static method in class org.apache.spark.sql.types.BinaryType
 
productElement(int) - Static method in class org.apache.spark.sql.types.BooleanType
 
productElement(int) - Static method in class org.apache.spark.sql.types.ByteType
 
productElement(int) - Static method in class org.apache.spark.sql.types.CalendarIntervalType
 
productElement(int) - Static method in class org.apache.spark.sql.types.DateType
 
productElement(int) - Static method in class org.apache.spark.sql.types.DoubleType
 
productElement(int) - Static method in class org.apache.spark.sql.types.FloatType
 
productElement(int) - Static method in class org.apache.spark.sql.types.IntegerType
 
productElement(int) - Static method in class org.apache.spark.sql.types.LongType
 
productElement(int) - Static method in class org.apache.spark.sql.types.NullType
 
productElement(int) - Static method in class org.apache.spark.sql.types.ShortType
 
productElement(int) - Static method in class org.apache.spark.sql.types.StringType
 
productElement(int) - Static method in class org.apache.spark.sql.types.TimestampType
 
productElement(int) - Static method in class org.apache.spark.StopMapOutputTracker
 
productElement(int) - 类 中的静态方法org.apache.spark.streaming.kinesis.DefaultCredentials
 
productElement(int) - 类 中的静态方法org.apache.spark.streaming.scheduler.AllReceiverIds
 
productElement(int) - 类 中的静态方法org.apache.spark.streaming.scheduler.GetAllReceiverInfo
 
productElement(int) - 类 中的静态方法org.apache.spark.streaming.scheduler.StopAllReceivers
 
productElement(int) - 类 中的静态方法org.apache.spark.Success
 
productElement(int) - 类 中的静态方法org.apache.spark.TaskResultLost
 
productElement(int) - 类 中的静态方法org.apache.spark.TaskSchedulerIsSet
 
productElement(int) - 类 中的静态方法org.apache.spark.UnknownReason
 
productFeatures() - 类 中的方法org.apache.spark.mllib.recommendation.MatrixFactorizationModel
 
productIterator() - Static method in class org.apache.spark.ExpireDeadHosts

productIterator() - Static method in class org.apache.spark.metrics.DirectPoolMemory

productIterator() - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics

productIterator() - Static method in class org.apache.spark.metrics.JVMHeapMemory

productIterator() - Static method in class org.apache.spark.metrics.JVMOffHeapMemory

productIterator() - Static method in class org.apache.spark.metrics.MappedPoolMemory

productIterator() - Static method in class org.apache.spark.metrics.OffHeapExecutionMemory

productIterator() - Static method in class org.apache.spark.metrics.OffHeapStorageMemory

productIterator() - Static method in class org.apache.spark.metrics.OffHeapUnifiedMemory

productIterator() - Static method in class org.apache.spark.metrics.OnHeapExecutionMemory

productIterator() - Static method in class org.apache.spark.metrics.OnHeapStorageMemory

productIterator() - Static method in class org.apache.spark.metrics.OnHeapUnifiedMemory

productIterator() - Static method in class org.apache.spark.metrics.ProcessTreeMetrics

productIterator() - Static method in class org.apache.spark.ml.feature.Dot

productIterator() - Static method in class org.apache.spark.ml.feature.EmptyTerm

productIterator() - Static method in class org.apache.spark.Resubmitted

productIterator() - Static method in class org.apache.spark.rpc.netty.OnStart

productIterator() - Static method in class org.apache.spark.rpc.netty.OnStop

productIterator() - Static method in class org.apache.spark.scheduler.AllJobsCancelled

productIterator() - Static method in class org.apache.spark.scheduler.JobSucceeded

productIterator() - Static method in class org.apache.spark.scheduler.ResubmitFailedStages

productIterator() - Static method in class org.apache.spark.scheduler.StopCoordinator

productIterator() - Static method in class org.apache.spark.sql.jdbc.MySQLDialect

productIterator() - Static method in class org.apache.spark.sql.jdbc.OracleDialect

productIterator() - Static method in class org.apache.spark.sql.jdbc.TeradataDialect

productIterator() - Static method in class org.apache.spark.sql.sources.AlwaysFalse

productIterator() - Static method in class org.apache.spark.sql.sources.AlwaysTrue

productIterator() - Static method in class org.apache.spark.sql.types.BinaryType

productIterator() - Static method in class org.apache.spark.sql.types.BooleanType

productIterator() - Static method in class org.apache.spark.sql.types.ByteType

productIterator() - Static method in class org.apache.spark.sql.types.CalendarIntervalType

productIterator() - Static method in class org.apache.spark.sql.types.DateType

productIterator() - Static method in class org.apache.spark.sql.types.DoubleType

productIterator() - Static method in class org.apache.spark.sql.types.FloatType

productIterator() - Static method in class org.apache.spark.sql.types.IntegerType

productIterator() - Static method in class org.apache.spark.sql.types.LongType

productIterator() - Static method in class org.apache.spark.sql.types.NullType

productIterator() - Static method in class org.apache.spark.sql.types.ShortType

productIterator() - Static method in class org.apache.spark.sql.types.StringType

productIterator() - Static method in class org.apache.spark.sql.types.TimestampType

productIterator() - Static method in class org.apache.spark.StopMapOutputTracker

productIterator() - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials

productIterator() - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds

productIterator() - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo

productIterator() - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers

productIterator() - Static method in class org.apache.spark.Success

productIterator() - Static method in class org.apache.spark.TaskResultLost

productIterator() - Static method in class org.apache.spark.TaskSchedulerIsSet

productIterator() - Static method in class org.apache.spark.UnknownReason
 
productPrefix() - Static method in class org.apache.spark.ExpireDeadHosts

productPrefix() - Static method in class org.apache.spark.metrics.DirectPoolMemory

productPrefix() - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics

productPrefix() - Static method in class org.apache.spark.metrics.JVMHeapMemory

productPrefix() - Static method in class org.apache.spark.metrics.JVMOffHeapMemory

productPrefix() - Static method in class org.apache.spark.metrics.MappedPoolMemory

productPrefix() - Static method in class org.apache.spark.metrics.OffHeapExecutionMemory

productPrefix() - Static method in class org.apache.spark.metrics.OffHeapStorageMemory

productPrefix() - Static method in class org.apache.spark.metrics.OffHeapUnifiedMemory

productPrefix() - Static method in class org.apache.spark.metrics.OnHeapExecutionMemory

productPrefix() - Static method in class org.apache.spark.metrics.OnHeapStorageMemory

productPrefix() - Static method in class org.apache.spark.metrics.OnHeapUnifiedMemory

productPrefix() - Static method in class org.apache.spark.metrics.ProcessTreeMetrics

productPrefix() - Static method in class org.apache.spark.ml.feature.Dot

productPrefix() - Static method in class org.apache.spark.ml.feature.EmptyTerm

productPrefix() - Static method in class org.apache.spark.Resubmitted

productPrefix() - Static method in class org.apache.spark.rpc.netty.OnStart

productPrefix() - Static method in class org.apache.spark.rpc.netty.OnStop

productPrefix() - Static method in class org.apache.spark.scheduler.AllJobsCancelled

productPrefix() - Static method in class org.apache.spark.scheduler.JobSucceeded

productPrefix() - Static method in class org.apache.spark.scheduler.ResubmitFailedStages

productPrefix() - Static method in class org.apache.spark.scheduler.StopCoordinator

productPrefix() - Static method in class org.apache.spark.sql.jdbc.MySQLDialect

productPrefix() - Static method in class org.apache.spark.sql.jdbc.OracleDialect

productPrefix() - Static method in class org.apache.spark.sql.jdbc.TeradataDialect

productPrefix() - Static method in class org.apache.spark.sql.sources.AlwaysFalse

productPrefix() - Static method in class org.apache.spark.sql.sources.AlwaysTrue

productPrefix() - Static method in class org.apache.spark.sql.types.BinaryType

productPrefix() - Static method in class org.apache.spark.sql.types.BooleanType

productPrefix() - Static method in class org.apache.spark.sql.types.ByteType

productPrefix() - Static method in class org.apache.spark.sql.types.CalendarIntervalType

productPrefix() - Static method in class org.apache.spark.sql.types.DateType

productPrefix() - Static method in class org.apache.spark.sql.types.DoubleType

productPrefix() - Static method in class org.apache.spark.sql.types.FloatType

productPrefix() - Static method in class org.apache.spark.sql.types.IntegerType

productPrefix() - Static method in class org.apache.spark.sql.types.LongType

productPrefix() - Static method in class org.apache.spark.sql.types.NullType

productPrefix() - Static method in class org.apache.spark.sql.types.ShortType

productPrefix() - Static method in class org.apache.spark.sql.types.StringType

productPrefix() - Static method in class org.apache.spark.sql.types.TimestampType

productPrefix() - Static method in class org.apache.spark.StopMapOutputTracker

productPrefix() - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials

productPrefix() - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds

productPrefix() - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo

productPrefix() - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers

productPrefix() - Static method in class org.apache.spark.Success

productPrefix() - Static method in class org.apache.spark.TaskResultLost

productPrefix() - Static method in class org.apache.spark.TaskSchedulerIsSet

productPrefix() - Static method in class org.apache.spark.UnknownReason
 
progress() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryProgressEvent

project(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$

project(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$

properties() - Method in class org.apache.spark.scheduler.SparkListenerJobStart

properties() - Method in class org.apache.spark.scheduler.SparkListenerStageSubmitted

properties() - Method in interface org.apache.spark.sql.connector.catalog.Table
Returns the string map of table properties.
propertiesFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

propertiesToJson(Properties) - Static method in class org.apache.spark.util.JsonProtocol

property() - Method in class org.apache.spark.sql.connector.catalog.NamespaceChange.RemoveProperty

property() - Method in class org.apache.spark.sql.connector.catalog.NamespaceChange.SetProperty

property() - Method in class org.apache.spark.sql.connector.catalog.TableChange.RemoveProperty

property() - Method in class org.apache.spark.sql.connector.catalog.TableChange.SetProperty

PROVIDER() - Static method in class org.apache.spark.internal.config.History

provider() - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials

provider() - Method in interface org.apache.spark.streaming.kinesis.SparkAWSCredentials
Return an AWSCredentialProvider instance that can be used by the Kinesis Client Library to authenticate to AWS services (Kinesis, CloudWatch and DynamoDB).
proxyBase() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter

pruneColumns(StructType) - Method in interface org.apache.spark.sql.connector.read.SupportsPushDownRequiredColumns
Applies column pruning w.r.t. the given requiredSchema.
PrunedFilteredScan - Interface in org.apache.spark.sql.sources
A BaseRelation that can eliminate unneeded columns and filter using selected predicates before producing an RDD containing all matching tuples as Row objects.
PrunedScan - Interface in org.apache.spark.sql.sources
A BaseRelation that can eliminate unneeded columns before producing an RDD containing all of its tuples as Row objects.
Pseudorandom - Interface in org.apache.spark.util.random
:: DeveloperApi :: A class with pseudorandom behavior.
pushedFilters() - Method in interface org.apache.spark.sql.connector.read.SupportsPushDownFilters
Returns the filters that are pushed to the data source via SupportsPushDownFilters.pushFilters(Filter[]).
pushFilters(Filter[]) - Method in interface org.apache.spark.sql.connector.read.SupportsPushDownFilters
Pushes down filters, and returns filters that need to be evaluated after scanning.
put(ParamPair<?>...) - Method in class org.apache.spark.ml.param.ParamMap
Puts a list of param pairs (overwrites if the input params exist).
put(Param<T>, T) - Method in class org.apache.spark.ml.param.ParamMap
Puts a (param, value) pair (overwrites if the input param exists).
put(Seq<ParamPair<?>>) - Method in class org.apache.spark.ml.param.ParamMap
Puts a list of param pairs (overwrites if the input params exist).
put(String, String) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap

put(Object) - Method in class org.apache.spark.util.sketch.BloomFilter
Puts an item into this BloomFilter.
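A minimal sketch of putting items into a BloomFilter and probing for membership, assuming the spark-sketch artifact is on the classpath. Once an item has been put, mightContain is guaranteed not to return a false negative for it:

```java
import org.apache.spark.util.sketch.BloomFilter;

public class BloomFilterDemo {
    public static void main(String[] args) {
        // Size the filter for roughly 1000 items with a 3% false-positive rate.
        BloomFilter filter = BloomFilter.create(1000, 0.03);

        // put(Object) dispatches on the item type; the specialized
        // variants (putString, putLong, putBinary) can be called directly.
        filter.put("spark");
        filter.putLong(42L);

        // No false negatives for inserted items.
        System.out.println(filter.mightContain("spark")); // true
        System.out.println(filter.mightContain(42L));     // true
    }
}
```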
putAll(Map<? extends String, ? extends String>) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap

putBinary(byte[]) - Method in class org.apache.spark.util.sketch.BloomFilter
A specialized variant of BloomFilter.put(Object) that only supports byte array items.
putBoolean(String, boolean) - Method in class org.apache.spark.sql.types.MetadataBuilder
Puts a Boolean.
putBooleanArray(String, boolean[]) - Method in class org.apache.spark.sql.types.MetadataBuilder
Puts a Boolean array.
putDouble(String, double) - Method in class org.apache.spark.sql.types.MetadataBuilder
Puts a Double.
putDoubleArray(String, double[]) - Method in class org.apache.spark.sql.types.MetadataBuilder
Puts a Double array.
putLong(String, long) - Method in class org.apache.spark.sql.types.MetadataBuilder
Puts a Long.
putLong(long) - Method in class org.apache.spark.util.sketch.BloomFilter
A specialized variant of BloomFilter.put(Object) that only supports long items.
putLongArray(String, long[]) - Method in class org.apache.spark.sql.types.MetadataBuilder
Puts a Long array.
putMetadata(String, Metadata) - Method in class org.apache.spark.sql.types.MetadataBuilder
Puts a Metadata.
putMetadataArray(String, Metadata[]) - Method in class org.apache.spark.sql.types.MetadataBuilder
Puts a Metadata array.
putNull(String) - Method in class org.apache.spark.sql.types.MetadataBuilder
Puts a null.
putString(String, String) - Method in class org.apache.spark.sql.types.MetadataBuilder
Puts a String.
putString(String) - Method in class org.apache.spark.util.sketch.BloomFilter
A specialized variant of BloomFilter.put(Object) that only supports String items.
putStringArray(String, String[]) - Method in class org.apache.spark.sql.types.MetadataBuilder
Puts a String array.
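The MetadataBuilder put* methods above return the builder itself, so they can be chained before calling build() to obtain an immutable Metadata value. A minimal sketch (the key names here are purely illustrative):

```java
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.MetadataBuilder;

public class MetadataDemo {
    public static void main(String[] args) {
        // Chain typed puts, then freeze the result into an immutable Metadata.
        Metadata meta = new MetadataBuilder()
                .putLong("maxLength", 64)
                .putString("comment", "user-visible name")
                .putBoolean("indexed", true)
                .build();

        // Values are read back with the matching typed getters.
        System.out.println(meta.getLong("maxLength"));   // 64
        System.out.println(meta.getString("comment"));   // user-visible name
        System.out.println(meta.contains("indexed"));    // true
    }
}
```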
pValue() - Method in class org.apache.spark.mllib.stat.test.ChiSqTestResult

pValue() - Method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTestResult

pValue() - Method in interface org.apache.spark.mllib.stat.test.TestResult
The probability of obtaining a test statistic result at least as extreme as the one that was actually observed, assuming that the null hypothesis is true.
pValues() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionTrainingSummary

pValues() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary

PYSPARK_EXECUTOR_MEMORY() - Static method in class org.apache.spark.internal.config.Python

Python - Class in org.apache.spark.internal.config

Python() - Constructor for class org.apache.spark.internal.config.Python

PYTHON_DAEMON_MODULE() - Static method in class org.apache.spark.internal.config.Python

PYTHON_TASK_KILL_TIMEOUT() - Static method in class org.apache.spark.internal.config.Python

PYTHON_USE_DAEMON() - Static method in class org.apache.spark.internal.config.Python

PYTHON_WORKER_MODULE() - Static method in class org.apache.spark.internal.config.Python

PYTHON_WORKER_REUSE() - Static method in class org.apache.spark.internal.config.Python

PythonStreamingListener - Interface in org.apache.spark.streaming.api.java

pyUDT() - Method in class org.apache.spark.mllib.linalg.VectorUDT
 

Q

Q() - Method in class org.apache.spark.mllib.linalg.QRDecomposition

QRDecomposition<QType,RType> - Class in org.apache.spark.mllib.linalg
Represents QR factors.
QRDecomposition(QType, RType) - Constructor for class org.apache.spark.mllib.linalg.QRDecomposition

quantileCalculationStrategy() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

QuantileDiscretizer - Class in org.apache.spark.ml.feature
QuantileDiscretizer takes a column with continuous features and outputs a column with binned categorical features.
QuantileDiscretizer(String) - Constructor for class org.apache.spark.ml.feature.QuantileDiscretizer

QuantileDiscretizer() - Constructor for class org.apache.spark.ml.feature.QuantileDiscretizer

QuantileDiscretizerBase - Interface in org.apache.spark.ml.feature
quantileProbabilities() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression

quantileProbabilities() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel

quantileProbabilities() - Method in interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams
Param for quantile probabilities array.
quantiles() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions

quantilesCol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression

quantilesCol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel

quantilesCol() - Method in interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams
Param for quantiles column name.
QuantileStrategy - Class in org.apache.spark.mllib.tree.configuration
Enum for selecting the quantile calculation strategy.
QuantileStrategy() - Constructor for class org.apache.spark.mllib.tree.configuration.QuantileStrategy

quarter(Column) - Static method in class org.apache.spark.sql.functions
Extracts the quarter as an integer from a given date/timestamp/string.
query() - Method in interface org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectBase

query() - Method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand

query() - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveDirCommand

query() - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable

query() - Method in class org.apache.spark.sql.hive.execution.OptimizedCreateHiveTableAsSelectCommand

queryExecution() - Method in class org.apache.spark.sql.Dataset

queryExecution() - Method in class org.apache.spark.sql.KeyValueGroupedDataset

QueryExecutionListener - Interface in org.apache.spark.sql.util
The interface of query execution listener that can be used to analyze execution metrics.
queryName(String) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
Specifies the name of the StreamingQuery that can be started with start().
queueStream(Queue<JavaRDD<T>>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream from a queue of RDDs.
queueStream(Queue<JavaRDD<T>>, boolean) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream from a queue of RDDs.
queueStream(Queue<JavaRDD<T>>, boolean, JavaRDD<T>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream from a queue of RDDs.
queueStream(Queue<RDD<T>>, boolean, ClassTag<T>) - Method in class org.apache.spark.streaming.StreamingContext
Create an input stream from a queue of RDDs.
queueStream(Queue<RDD<T>>, boolean, RDD<T>, ClassTag<T>) - Method in class org.apache.spark.streaming.StreamingContext
Create an input stream from a queue of RDDs.
quot(byte, byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric

quot(Decimal, Decimal) - Method in class org.apache.spark.sql.types.Decimal.DecimalAsIfIntegral$

quot(Decimal) - Method in class org.apache.spark.sql.types.Decimal

quot(int, int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric

quot(long, long) - Static method in class org.apache.spark.sql.types.LongExactNumeric

quot(short, short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric

quoted() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.IdentifierHelper

quoted() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.MultipartIdentifierHelper

quoted() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.NamespaceHelper

quoteIdentifier(String) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect

quoteIdentifier(String) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect

quoteIdentifier(String) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect

quoteIdentifier(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
Quotes the identifier.
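A short sketch of how dialect-specific quoting works, assuming the spark-sql artifact is on the classpath. JdbcDialects.get resolves a dialect from a JDBC URL prefix, and each dialect's quoteIdentifier wraps the name in its database's quoting characters so that reserved words survive in generated SQL:

```java
import org.apache.spark.sql.jdbc.JdbcDialect;
import org.apache.spark.sql.jdbc.JdbcDialects;

public class QuoteDemo {
    public static void main(String[] args) {
        // The dialect is chosen by matching the URL prefix.
        JdbcDialect mysql = JdbcDialects.get("jdbc:mysql://localhost/db");
        JdbcDialect postgres = JdbcDialects.get("jdbc:postgresql://localhost/db");

        // "order" is a reserved word; quoting makes it a valid column name.
        System.out.println(mysql.quoteIdentifier("order"));
        System.out.println(postgres.quoteIdentifier("order"));
    }
}
```

MySQL's dialect uses backtick quoting, while most others fall back to the ANSI double-quote form inherited from JdbcDialect.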
quoteIdentifier(String) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect

quoteIdentifier(String) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect

quoteIdentifier(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect

quoteIdentifier(String) - Static method in class org.apache.spark.sql.jdbc.OracleDialect

quoteIdentifier(String) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect

quoteIdentifier(String) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
 

R

R - Class in org.apache.spark.internal.config

R() - Constructor for class org.apache.spark.internal.config.R

R() - Method in class org.apache.spark.mllib.linalg.QRDecomposition

r2() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
Returns R^2^, the coefficient of determination.
r2() - Method in class org.apache.spark.mllib.evaluation.RegressionMetrics
Returns R^2^, the unadjusted coefficient of determination.
r2adj() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
Returns Adjusted R^2^, the adjusted coefficient of determination.
R_BACKEND_CONNECTION_TIMEOUT() - Static method in class org.apache.spark.internal.config.R

R_COMMAND() - Static method in class org.apache.spark.internal.config.R

R_HEARTBEAT_INTERVAL() - Static method in class org.apache.spark.internal.config.R

R_NUM_BACKEND_THREADS() - Static method in class org.apache.spark.internal.config.R

RACK_LOCAL() - Static method in class org.apache.spark.scheduler.TaskLocality

radians(Column) - Static method in class org.apache.spark.sql.functions
Converts an angle measured in degrees to an approximately equivalent angle measured in radians.
radians(String) - Static method in class org.apache.spark.sql.functions
Converts an angle measured in degrees to an approximately equivalent angle measured in radians.
rand(int, int, Random) - Static method in class org.apache.spark.ml.linalg.DenseMatrix
Generate a DenseMatrix consisting of i.i.d.
rand(int, int, Random) - Static method in class org.apache.spark.ml.linalg.Matrices
Generate a DenseMatrix consisting of i.i.d.
rand(int, int, Random) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
Generate a DenseMatrix consisting of i.i.d.
rand(int, int, Random) - Static method in class org.apache.spark.mllib.linalg.Matrices
Generate a DenseMatrix consisting of i.i.d.
rand(long) - Static method in class org.apache.spark.sql.functions
Generate a random column with independent and identically distributed (i.i.d.) samples from U[0.0, 1.0].
rand() - Static method in class org.apache.spark.sql.functions
Generate a random column with independent and identically distributed (i.i.d.) samples from U[0.0, 1.0].
randn(int, int, Random) - Static method in class org.apache.spark.ml.linalg.DenseMatrix
Generate a DenseMatrix consisting of i.i.d.
randn(int, int, Random) - Static method in class org.apache.spark.ml.linalg.Matrices
Generate a DenseMatrix consisting of i.i.d.
randn(int, int, Random) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
Generate a DenseMatrix consisting of i.i.d.
randn(int, int, Random) - Static method in class org.apache.spark.mllib.linalg.Matrices
Generate a DenseMatrix consisting of i.i.d.
randn(long) - Static method in class org.apache.spark.sql.functions
Generate a column with independent and identically distributed (i.i.d.) samples from the standard normal distribution.
randn() - Static method in class org.apache.spark.sql.functions
Generate a column with independent and identically distributed (i.i.d.) samples from the standard normal distribution.
random() - Method in class org.apache.spark.ml.image.SamplePathFilter

RANDOM() - Static method in class org.apache.spark.mllib.clustering.KMeans

random() - Static method in class org.apache.spark.util.Utils

RandomBlockReplicationPolicy - Class in org.apache.spark.storage

RandomBlockReplicationPolicy() - Constructor for class org.apache.spark.storage.RandomBlockReplicationPolicy

RandomDataGenerator<T> - Interface in org.apache.spark.mllib.random
:: DeveloperApi :: Trait for random data generators that generate i.i.d. data.
RandomForest - Class in org.apache.spark.ml.tree.impl
ALGORITHM: This is a sketch of the algorithm to help new developers.
RandomForest() - Constructor for class org.apache.spark.ml.tree.impl.RandomForest

RandomForest - Class in org.apache.spark.mllib.tree
A class that implements a Random Forest learning algorithm for classification and regression.
RandomForest(Strategy, int, String, int) - Constructor for class org.apache.spark.mllib.tree.RandomForest

RandomForestClassificationModel - Class in org.apache.spark.ml.classification
Random Forest model for classification.
RandomForestClassifier - Class in org.apache.spark.ml.classification
Random Forest learning algorithm for classification.
RandomForestClassifier(String) - Constructor for class org.apache.spark.ml.classification.RandomForestClassifier

RandomForestClassifier() - Constructor for class org.apache.spark.ml.classification.RandomForestClassifier

RandomForestClassifierParams - Interface in org.apache.spark.ml.tree

RandomForestModel - Class in org.apache.spark.mllib.tree.model
Represents a random forest model.
RandomForestModel(Enumeration.Value, DecisionTreeModel[]) - Constructor for class org.apache.spark.mllib.tree.model.RandomForestModel

RandomForestParams - Interface in org.apache.spark.ml.tree
Parameters for Random Forest algorithms.
RandomForestRegressionModel - Class in org.apache.spark.ml.regression
Random Forest model for regression.
RandomForestRegressor - Class in org.apache.spark.ml.regression
Random Forest learning algorithm for regression.
RandomForestRegressor(String) - Constructor for class org.apache.spark.ml.regression.RandomForestRegressor

RandomForestRegressor() - Constructor for class org.apache.spark.ml.regression.RandomForestRegressor

RandomForestRegressorParams - Interface in org.apache.spark.ml.tree

randomize(TraversableOnce<T>, ClassTag<T>) - Static method in class org.apache.spark.util.Utils
Shuffle the elements of a collection into a random order, returning the result in a new collection.
randomizeInPlace(Object, Random) - Static method in class org.apache.spark.util.Utils
Shuffle the elements of an array into a random order, modifying the original array.
randomJavaRDD(JavaSparkContext, RandomDataGenerator<T>, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
:: DeveloperApi :: Generates an RDD comprised of i.i.d.
randomJavaRDD(JavaSparkContext, RandomDataGenerator<T>, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
:: DeveloperApi :: RandomRDDs.randomJavaRDD with the default seed.
randomJavaRDD(JavaSparkContext, RandomDataGenerator<T>, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
:: DeveloperApi :: RandomRDDs.randomJavaRDD with the default seed and numPartitions.
randomJavaVectorRDD(JavaSparkContext, RandomDataGenerator<Object>, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
:: DeveloperApi :: Java-friendly version of RandomRDDs.randomVectorRDD.
randomJavaVectorRDD(JavaSparkContext, RandomDataGenerator<Object>, long, int, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
:: DeveloperApi :: RandomRDDs.randomJavaVectorRDD with the default seed.
randomJavaVectorRDD(JavaSparkContext, RandomDataGenerator<Object>, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
:: DeveloperApi :: RandomRDDs.randomJavaVectorRDD with the default number of partitions and the default seed.
randomRDD(SparkContext, RandomDataGenerator<T>, long, int, long, ClassTag<T>) - Static method in class org.apache.spark.mllib.random.RandomRDDs
:: DeveloperApi :: Generates an RDD comprised of i.i.d.
RandomRDDs - Class in org.apache.spark.mllib.random
Generator methods for creating RDDs comprised of i.i.d.
RandomRDDs() - Constructor for class org.apache.spark.mllib.random.RandomRDDs

RandomSampler<T,U> - Interface in org.apache.spark.util.random
:: DeveloperApi :: A pseudorandom sampler.
:: DeveloperApi :: A pseudorandom sampler.
randomSplit(double[]) - Method in class org.apache.spark.api.java.JavaRDD
Randomly splits this RDD with the provided weights.
randomSplit(double[], long) - Method in class org.apache.spark.api.java.JavaRDD
Randomly splits this RDD with the provided weights.
randomSplit(double[], long) - Method in class org.apache.spark.rdd.RDD
Randomly splits this RDD with the provided weights.
randomSplit(double[], long) - Method in class org.apache.spark.sql.Dataset
Randomly splits this Dataset with the provided weights.
randomSplit(double[]) - Method in class org.apache.spark.sql.Dataset
Randomly splits this Dataset with the provided weights.
randomSplitAsList(double[], long) - Method in class org.apache.spark.sql.Dataset
Returns a Java list that contains randomly split Dataset with the provided weights.
randomVectorRDD(SparkContext, RandomDataGenerator<Object>, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
:: DeveloperApi :: Generates an RDD[Vector] with vectors containing i.i.d.
RandomVertexCut$() - Constructor for class org.apache.spark.graphx.PartitionStrategy.RandomVertexCut$

range() - Method in class org.apache.spark.ml.feature.RobustScalerModel

range(long, long, long, int) - Method in class org.apache.spark.SparkContext
Creates a new RDD[Long] containing elements from start to end (exclusive), increased by step every element.
range(long) - Method in class org.apache.spark.sql.SparkSession
Creates a Dataset with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.
range(long, long) - Method in class org.apache.spark.sql.SparkSession
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.
range(long, long, long) - Method in class org.apache.spark.sql.SparkSession
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.
range(long, long, long, int) - Method in class org.apache.spark.sql.SparkSession
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value, with partition number specified.
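A minimal sketch combining the SparkSession.range and Dataset.randomSplit entries above, assuming a local Spark runtime is available:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.SparkSession;

public class RangeDemo {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .master("local[2]")
                .appName("RangeDemo")
                .getOrCreate();

        // 0, 3, 6, ..., 99: start 0, end 100 (exclusive), step 3, 2 partitions.
        Dataset<Long> ids = spark.range(0, 100, 3, 2);
        System.out.println(ids.count()); // 34

        // Split roughly 70/30; weights are normalized if they don't sum to 1,
        // and the seed makes the split reproducible.
        Dataset<Long>[] splits = ids.randomSplit(new double[]{0.7, 0.3}, 42L);
        System.out.println(splits.length); // 2

        spark.stop();
    }
}
```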
range(long) - 类 中的方法org.apache.spark.sql.SQLContext
Creates a DataFrame with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.
range(long, long) - 类 中的方法org.apache.spark.sql.SQLContext
Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.
range(long, long, long) - 类 中的方法org.apache.spark.sql.SQLContext
Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.
range(long, long, long, int) - 类 中的方法org.apache.spark.sql.SQLContext
Creates a DataFrame with a single LongType column named id, containing elements in an range from start to end (exclusive) with an step value, with partition number specified.
rangeBetween(long, long) - 类 中的静态方法org.apache.spark.sql.expressions.Window
Creates a WindowSpec with the frame boundaries defined, from start (inclusive) to end (inclusive).
rangeBetween(long, long) - 类 中的方法org.apache.spark.sql.expressions.WindowSpec
Defines the frame boundaries, from start (inclusive) to end (inclusive).
RangeDependency<T> - Class in org.apache.spark
:: DeveloperApi :: Represents a one-to-one dependency between ranges of partitions in the parent and child RDDs.
RangeDependency(RDD<T>, int, int, int) - Constructor for class org.apache.spark.RangeDependency

RangePartitioner<K,V> - Class in org.apache.spark
A Partitioner that partitions sortable records by range into roughly equal ranges.
RangePartitioner(int, RDD<? extends Product2<K, V>>, boolean, int, Ordering<K>, ClassTag<K>) - Constructor for class org.apache.spark.RangePartitioner

RangePartitioner(int, RDD<? extends Product2<K, V>>, boolean, Ordering<K>, ClassTag<K>) - Constructor for class org.apache.spark.RangePartitioner

rank() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf

rank() - Method in class org.apache.spark.ml.recommendation.ALS

rank() - Method in class org.apache.spark.ml.recommendation.ALSModel

rank() - Method in interface org.apache.spark.ml.recommendation.ALSParams
Param for rank of the matrix factorization (positive).
rank() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary

rank() - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel

rank() - Static method in class org.apache.spark.sql.functions
Window function: returns the rank of rows within a window partition.
RankingEvaluator - Class in org.apache.spark.ml.evaluation
:: Experimental :: Evaluator for ranking, which expects two input columns: prediction and label.
RankingEvaluator(String) - Constructor for class org.apache.spark.ml.evaluation.RankingEvaluator

RankingEvaluator() - Constructor for class org.apache.spark.ml.evaluation.RankingEvaluator

RankingMetrics<T> - Class in org.apache.spark.mllib.evaluation
Evaluator for ranking algorithms.
RankingMetrics(RDD<Tuple2<Object, Object>>, ClassTag<T>) - Constructor for class org.apache.spark.mllib.evaluation.RankingMetrics

RateEstimator - Interface in org.apache.spark.streaming.scheduler.rate
A component that estimates the rate at which an InputDStream should ingest records, based on updates at every batch completion.
Rating(ID, ID, float) - Constructor for class org.apache.spark.ml.recommendation.ALS.Rating

rating() - Method in class org.apache.spark.ml.recommendation.ALS.Rating

Rating - Class in org.apache.spark.mllib.recommendation
A more compact class to represent a rating than Tuple3[Int, Int, Double].
Rating(int, int, double) - Constructor for class org.apache.spark.mllib.recommendation.Rating

rating() - Method in class org.apache.spark.mllib.recommendation.Rating

Rating$() - Constructor for class org.apache.spark.ml.recommendation.ALS.Rating$

RatingBlock$() - Constructor for class org.apache.spark.ml.recommendation.ALS.RatingBlock$

ratingCol() - Method in class org.apache.spark.ml.recommendation.ALS

ratingCol() - Method in interface org.apache.spark.ml.recommendation.ALSParams
Param for the column name for ratings.
ratioParam() - Static method in class org.apache.spark.ml.image.SamplePathFilter

raw2ProbabilityInPlace(Vector) - Method in interface org.apache.spark.ml.ann.TopologyModel
Probability of the model.
rawCount() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData

rawPredictionCol() - Method in class org.apache.spark.ml.classification.ClassificationModel

rawPredictionCol() - Method in class org.apache.spark.ml.classification.Classifier

rawPredictionCol() - Method in class org.apache.spark.ml.classification.OneVsRest

rawPredictionCol() - Method in class org.apache.spark.ml.classification.OneVsRestModel

rawPredictionCol() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator

rawPredictionCol() - Method in interface org.apache.spark.ml.param.shared.HasRawPredictionCol
Param for raw prediction (a.k.a. confidence) column name.
rawSocketStream(String, int, StorageLevel) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream from network source hostname:port, where data is received as serialized blocks (serialized using Spark's serializer) that can be directly pushed into the block manager without deserializing them.
rawSocketStream(String, int) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream from network source hostname:port, where data is received as serialized blocks (serialized using Spark's serializer) that can be directly pushed into the block manager without deserializing them.
rawSocketStream(String, int, StorageLevel, ClassTag<T>) - Method in class org.apache.spark.streaming.StreamingContext
Create an input stream from network source hostname:port, where data is received as serialized blocks (serialized using Spark's serializer) that can be directly pushed into the block manager without deserializing them.
RawTextHelper - Class in org.apache.spark.streaming.util

RawTextHelper() - Constructor for class org.apache.spark.streaming.util.RawTextHelper

RawTextSender - Class in org.apache.spark.streaming.util
A helper program that sends blocks of Kryo-serialized text strings out on a socket at a specified rate.
RawTextSender() - Constructor for class org.apache.spark.streaming.util.RawTextSender

RBackendAuthHandler - Class in org.apache.spark.api.r
Authentication handler for connections from the R process.
RBackendAuthHandler(String) - Constructor for class org.apache.spark.api.r.RBackendAuthHandler

rdd() - Method in class org.apache.spark.api.java.JavaDoubleRDD

rdd() - Method in class org.apache.spark.api.java.JavaPairRDD

rdd() - Method in class org.apache.spark.api.java.JavaRDD

rdd() - Method in interface org.apache.spark.api.java.JavaRDDLike

RDD() - Static method in class org.apache.spark.api.r.RRunnerModes

rdd() - Method in class org.apache.spark.Dependency

rdd() - Method in class org.apache.spark.NarrowDependency

RDD<T> - Class in org.apache.spark.rdd
A Resilient Distributed Dataset (RDD), the basic abstraction in Spark.
RDD(SparkContext, Seq<Dependency<?>>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.RDD

RDD(RDD<?>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.RDD
Construct an RDD with just a one-to-one dependency on one parent.
rdd() - Method in class org.apache.spark.ShuffleDependency

rdd() - Method in class org.apache.spark.sql.Dataset

RDD() - Static method in class org.apache.spark.storage.BlockId

RDD_NAME() - Static method in class org.apache.spark.ui.storage.ToolTips

RDDBarrier<T> - Class in org.apache.spark.rdd
:: Experimental :: Wraps an RDD in a barrier stage, which forces Spark to launch tasks of this stage together.
RDDBlockId - Class in org.apache.spark.storage

RDDBlockId(int, int) - Constructor for class org.apache.spark.storage.RDDBlockId

rddBlocks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

rddBlocks() - Method in class org.apache.spark.status.LiveExecutor

rddCleaned(int) - Method in interface org.apache.spark.CleanerListener

RDDDataDistribution - Class in org.apache.spark.status.api.v1

RDDFunctions<T> - Class in org.apache.spark.mllib.rdd
:: DeveloperApi :: Machine learning specific RDD functions.
RDDFunctions(RDD<T>, ClassTag<T>) - Constructor for class org.apache.spark.mllib.rdd.RDDFunctions

rddId() - Method in class org.apache.spark.CleanCheckpoint

rddId() - Method in class org.apache.spark.CleanRDD

rddId() - Method in class org.apache.spark.scheduler.SparkListenerUnpersistRDD

rddId() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveRdd

rddId() - Method in class org.apache.spark.storage.RDDBlockId

rddIds() - Method in class org.apache.spark.status.api.v1.StageData

RDDInfo - Class in org.apache.spark.storage

RDDInfo(int, String, int, StorageLevel, boolean, Seq<Object>, String, Option<org.apache.spark.rdd.RDDOperationScope>) - Constructor for class org.apache.spark.storage.RDDInfo

rddInfoFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

rddInfos() - Method in class org.apache.spark.scheduler.StageInfo

rddInfoToJson(RDDInfo) - Static method in class org.apache.spark.util.JsonProtocol

RDDPartitionInfo - Class in org.apache.spark.status.api.v1

RDDPartitionSeq - Class in org.apache.spark.status
A custom sequence of partitions based on a mutable linked list.
RDDPartitionSeq() - Constructor for class org.apache.spark.status.RDDPartitionSeq

rdds() - Method in class org.apache.spark.rdd.CoGroupedRDD

rdds() - Method in class org.apache.spark.rdd.UnionRDD

RDDStorageInfo - Class in org.apache.spark.status.api.v1

rddToAsyncRDDActions(RDD<T>, ClassTag<T>) - Static method in class org.apache.spark.rdd.RDD

rddToDatasetHolder(RDD<T>, Encoder<T>) - Method in class org.apache.spark.sql.SQLImplicits
Creates a Dataset from an RDD.
rddToOrderedRDDFunctions(RDD<Tuple2<K, V>>, Ordering<K>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.rdd.RDD

rddToPairRDDFunctions(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>, Ordering<K>) - Static method in class org.apache.spark.rdd.RDD

rddToSequenceFileRDDFunctions(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>, <any>, <any>) - Static method in class org.apache.spark.rdd.RDD

read() - Method in class org.apache.spark.io.NioBufferedFileInputStream

read(byte[], int, int) - Method in class org.apache.spark.io.NioBufferedFileInputStream

read() - Method in class org.apache.spark.io.ReadAheadInputStream

read(byte[], int, int) - Method in class org.apache.spark.io.ReadAheadInputStream

read() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel

read() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier

read() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel

read() - Static method in class org.apache.spark.ml.classification.GBTClassifier

read() - Static method in class org.apache.spark.ml.classification.LinearSVC

read() - Static method in class org.apache.spark.ml.classification.LinearSVCModel

read() - Static method in class org.apache.spark.ml.classification.LogisticRegression

read() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel

read() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel

read() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier

read() - Static method in class org.apache.spark.ml.classification.NaiveBayes

read() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel

read() - Static method in class org.apache.spark.ml.classification.OneVsRest

read() - Static method in class org.apache.spark.ml.classification.OneVsRestModel

read() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel

read() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier

read() - Static method in class org.apache.spark.ml.clustering.BisectingKMeans

read() - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel

read() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel

read() - Static method in class org.apache.spark.ml.clustering.GaussianMixture

read() - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel

read() - Static method in class org.apache.spark.ml.clustering.KMeans

read() - Static method in class org.apache.spark.ml.clustering.KMeansModel

read() - Static method in class org.apache.spark.ml.clustering.LDA

read() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel

read() - Static method in class org.apache.spark.ml.clustering.PowerIterationClustering

read() - Static method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator

read() - Static method in class org.apache.spark.ml.evaluation.ClusteringEvaluator

read() - Static method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

read() - Static method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator

read() - Static method in class org.apache.spark.ml.evaluation.RankingEvaluator

read() - Static method in class org.apache.spark.ml.evaluation.RegressionEvaluator

read() - Static method in class org.apache.spark.ml.feature.Binarizer

read() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH

read() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel

read() - Static method in class org.apache.spark.ml.feature.Bucketizer

read() - Static method in class org.apache.spark.ml.feature.ChiSqSelector

read() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel

read() - Static method in class org.apache.spark.ml.feature.ColumnPruner

read() - Static method in class org.apache.spark.ml.feature.CountVectorizer

read() - Static method in class org.apache.spark.ml.feature.CountVectorizerModel

read() - Static method in class org.apache.spark.ml.feature.DCT

read() - Static method in class org.apache.spark.ml.feature.ElementwiseProduct

read() - Static method in class org.apache.spark.ml.feature.FeatureHasher

read() - Static method in class org.apache.spark.ml.feature.HashingTF

read() - Static method in class org.apache.spark.ml.feature.IDF

read() - Static method in class org.apache.spark.ml.feature.IDFModel

read() - Static method in class org.apache.spark.ml.feature.Imputer

read() - Static method in class org.apache.spark.ml.feature.ImputerModel

read() - Static method in class org.apache.spark.ml.feature.IndexToString

read() - Static method in class org.apache.spark.ml.feature.Interaction

read() - Static method in class org.apache.spark.ml.feature.MaxAbsScaler

read() - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel

read() - Static method in class org.apache.spark.ml.feature.MinHashLSH

read() - Static method in class org.apache.spark.ml.feature.MinHashLSHModel

read() - Static method in class org.apache.spark.ml.feature.MinMaxScaler

read() - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel

read() - Static method in class org.apache.spark.ml.feature.NGram

read() - Static method in class org.apache.spark.ml.feature.Normalizer

read() - Static method in class org.apache.spark.ml.feature.OneHotEncoder

read() - Static method in class org.apache.spark.ml.feature.OneHotEncoderModel

read() - Static method in class org.apache.spark.ml.feature.PCA

read() - Static method in class org.apache.spark.ml.feature.PCAModel

read() - Static method in class org.apache.spark.ml.feature.PolynomialExpansion

read() - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer

read() - Static method in class org.apache.spark.ml.feature.RegexTokenizer

read() - Static method in class org.apache.spark.ml.feature.RFormula

read() - Static method in class org.apache.spark.ml.feature.RFormulaModel

read() - Static method in class org.apache.spark.ml.feature.RobustScaler

read() - Static method in class org.apache.spark.ml.feature.RobustScalerModel

read() - Static method in class org.apache.spark.ml.feature.SQLTransformer

read() - Static method in class org.apache.spark.ml.feature.StandardScaler

read() - Static method in class org.apache.spark.ml.feature.StandardScalerModel

read() - Static method in class org.apache.spark.ml.feature.StopWordsRemover

read() - Static method in class org.apache.spark.ml.feature.StringIndexer

read() - Static method in class org.apache.spark.ml.feature.StringIndexerModel

read() - Static method in class org.apache.spark.ml.feature.Tokenizer

read() - Static method in class org.apache.spark.ml.feature.VectorAssembler

read() - Static method in class org.apache.spark.ml.feature.VectorAttributeRewriter

read() - Static method in class org.apache.spark.ml.feature.VectorIndexer

read() - Static method in class org.apache.spark.ml.feature.VectorIndexerModel

read() - Static method in class org.apache.spark.ml.feature.VectorSizeHint

read() - Static method in class org.apache.spark.ml.feature.VectorSlicer

read() - Static method in class org.apache.spark.ml.feature.Word2Vec

read() - Static method in class org.apache.spark.ml.feature.Word2VecModel

read() - Static method in class org.apache.spark.ml.fpm.FPGrowth

read() - Static method in class org.apache.spark.ml.fpm.FPGrowthModel

read() - Static method in class org.apache.spark.ml.Pipeline

read() - Static method in class org.apache.spark.ml.PipelineModel

read() - Static method in class org.apache.spark.ml.recommendation.ALS

read() - Static method in class org.apache.spark.ml.recommendation.ALSModel

read() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression

read() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel

read() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel

read() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor

read() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel

read() - Static method in class org.apache.spark.ml.regression.GBTRegressor

read() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression

read() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel

read() - Static method in class org.apache.spark.ml.regression.IsotonicRegression

read() - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel

read() - Static method in class org.apache.spark.ml.regression.LinearRegression

read() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel

read() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel

read() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor

read() - Static method in class org.apache.spark.ml.tuning.CrossValidator

read() - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel

read() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit

read() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel

read() - Method in interface org.apache.spark.ml.util.DefaultParamsReadable

read() - Method in interface org.apache.spark.ml.util.MLReadable
Returns an MLReader instance for this class.
read(ByteBuffer) - Method in class org.apache.spark.security.CryptoStreamUtils.ErrorHandlingReadableChannel

read(Kryo, Input, Class<Iterable<?>>) - Method in class org.apache.spark.serializer.JavaIterableWrapperSerializer

read() - Method in class org.apache.spark.sql.SparkSession
Returns a DataFrameReader that can be used to read non-streaming data in as a DataFrame.
read() - Method in class org.apache.spark.sql.SQLContext
Returns a DataFrameReader that can be used to read non-streaming data in as a DataFrame.
read() - Method in class org.apache.spark.storage.BufferReleasingInputStream

read(byte[]) - Method in class org.apache.spark.storage.BufferReleasingInputStream

read(byte[], int, int) - Method in class org.apache.spark.storage.BufferReleasingInputStream

read(String) - Static method in class org.apache.spark.streaming.CheckpointReader
Read checkpoint files present in the given checkpoint directory.
read(String, SparkConf, Configuration, boolean) - Static method in class org.apache.spark.streaming.CheckpointReader
Read checkpoint files present in the given checkpoint directory.
read(WriteAheadLogRecordHandle) - Method in class org.apache.spark.streaming.util.WriteAheadLog
Read a written record based on the given record handle.
ReadableChannelFileRegion - Class in org.apache.spark.storage

ReadableChannelFileRegion(ReadableByteChannel, long) - Constructor for class org.apache.spark.storage.ReadableChannelFileRegion

ReadAheadInputStream - Class in org.apache.spark.io
InputStream implementation which asynchronously reads ahead from the underlying input stream when a specified amount of data has been read from the current buffer.
ReadAheadInputStream(InputStream, int) - Constructor for class org.apache.spark.io.ReadAheadInputStream
Creates a ReadAheadInputStream with the specified buffer size and read-ahead threshold.
readAll() - Method in class org.apache.spark.streaming.util.WriteAheadLog
Read and return an iterator of all the records that have been written but not yet cleaned up.
readArray(DataInputStream, JVMObjectTracker) - Static method in class org.apache.spark.api.r.SerDe

readArrowStreamFromFile(SparkSession, String) - Static method in class org.apache.spark.sql.api.r.SQLUtils
R-callable function to read a file in Arrow stream format and create an RDD using each serialized ArrowRecordBatch as a partition.
readBoolean(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe

readBooleanArr(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe

readBytes(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe

readBytes() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions

readBytesArr(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe

readDate(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe

readDouble(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe

readDoubleArr(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe

reader() - Method in class org.apache.spark.ml.LoadInstanceEnd

reader() - Method in class org.apache.spark.ml.LoadInstanceStart

readExternal(ObjectInput) - Method in class org.apache.spark.serializer.JavaSerializer

readExternal(ObjectInput) - Method in class org.apache.spark.storage.BlockManagerId

readExternal(ObjectInput) - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo

readExternal(ObjectInput) - Method in class org.apache.spark.storage.StorageLevel

readFrom(ConfigReader) - Method in class org.apache.spark.internal.config.ConfigEntryWithDefault

readFrom(ConfigReader) - Method in class org.apache.spark.internal.config.ConfigEntryWithDefaultFunction

readFrom(ConfigReader) - Method in class org.apache.spark.internal.config.ConfigEntryWithDefaultString

readFrom(InputStream) - Static method in class org.apache.spark.util.sketch.BloomFilter
Reads in a BloomFilter from an input stream.
readFrom(InputStream) - Static method in class org.apache.spark.util.sketch.CountMinSketch
Reads in a CountMinSketch from an input stream.
readFrom(byte[]) - Static method in class org.apache.spark.util.sketch.CountMinSketch
Reads in a CountMinSketch from a byte array.
readInt(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe

readIntArr(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe

readKey(ClassTag<T>) - Method in class org.apache.spark.serializer.DeserializationStream
Reads the object representing the key of a key-value pair.
readList(DataInputStream, JVMObjectTracker) - Static method in class org.apache.spark.api.r.SerDe

readMap(DataInputStream, JVMObjectTracker) - Static method in class org.apache.spark.api.r.SerDe

readObject(DataInputStream, JVMObjectTracker) - Static method in class org.apache.spark.api.r.SerDe

readObject(ClassTag<T>) - Method in class org.apache.spark.serializer.DeserializationStream
The most general-purpose method to read an object.
readObjectType(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe

readOrcSchemasInParallel(Seq<FileStatus>, Configuration, boolean) - Static method in class org.apache.spark.sql.hive.orc.OrcFileOperator
Reads ORC file schemas in a multi-threaded manner, using the Hive ORC library.
readRecords() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions

readSchema() - Method in interface org.apache.spark.sql.connector.read.Scan
Returns the actual schema of this data source scan, which may be different from the physical schema of the underlying storage, as column pruning or other optimizations may happen.
readSchema(Seq<String>, Option<Configuration>, boolean) - Static method in class org.apache.spark.sql.hive.orc.OrcFileOperator

readSqlObject(DataInputStream, char) - Static method in class org.apache.spark.sql.api.r.SQLUtils

readStream() - Method in class org.apache.spark.sql.SparkSession
Returns a DataStreamReader that can be used to read streaming data in as a DataFrame.
readStream() - Method in class org.apache.spark.sql.SQLContext
Returns a DataStreamReader that can be used to read streaming data in as a DataFrame.
readString(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe

readStringArr(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe

readStringBytes(DataInputStream, int) - Static method in class org.apache.spark.api.r.SerDe

readTime(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe

readTypedObject(DataInputStream, char, JVMObjectTracker) - Static method in class org.apache.spark.api.r.SerDe

readValue(ClassTag<T>) - Method in class org.apache.spark.serializer.DeserializationStream
Reads the object representing the value of a key-value pair.
ready(Duration, CanAwait) - Method in class org.apache.spark.ComplexFutureAction

ready(Duration, CanAwait) - Method in interface org.apache.spark.FutureAction
Blocks until this action completes.
ready(Duration, CanAwait) - Method in class org.apache.spark.SimpleFutureAction

REAPER_ITERATIONS() - Static method in class org.apache.spark.internal.config.Deploy

reason() - Method in class org.apache.spark.ExecutorLostFailure

reason() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask

reason() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor

reason() - Method in class org.apache.spark.scheduler.local.KillTask

reason() - Method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved

reason() - Method in class org.apache.spark.scheduler.SparkListenerTaskEnd

reason() - Method in class org.apache.spark.TaskKilled

reason() - Method in exception org.apache.spark.TaskKilledException

Recall - Class in org.apache.spark.mllib.evaluation.binary
Recall.
Recall() - Constructor for class org.apache.spark.mllib.evaluation.binary.Recall

recall(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
Returns recall for a given label (category).
recall() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
Returns document-based recall averaged by the number of documents.
recall(double) - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
Returns recall for a given label (category).
recallAt(int) - Method in class org.apache.spark.mllib.evaluation.RankingMetrics
Compute the average recall of all the queries, truncated at ranking position k.
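The recallAt(k) entry above averages, over all queries, the fraction of each query's relevant items that appear among its top-k predictions. A simplified pure-Python sketch of that metric (not the Spark/RDD implementation):

```python
def recall_at_k(predictions, labels, k):
    """Recall truncated at rank k for a single query: the fraction of
    relevant items (labels) found among the top-k predictions."""
    if not labels:
        return 0.0
    relevant = set(labels)
    hits = sum(1 for p in predictions[:k] if p in relevant)
    return hits / len(relevant)

def mean_recall_at_k(queries, k):
    """Average recall@k over (predictions, labels) pairs, sketching what
    RankingMetrics.recallAt computes over an RDD of such pairs."""
    return sum(recall_at_k(p, l, k) for p, l in queries) / len(queries)

queries = [(["a", "b", "c"], ["a", "c"]), (["x", "y"], ["y", "z"])]
print(mean_recall_at_k(queries, 2))  # (1/2 + 1/2) / 2 = 0.5
```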
recallByLabel() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
Returns recall for each label (category).
recallByThreshold() - Method in interface org.apache.spark.ml.classification.BinaryLogisticRegressionSummary
Returns a dataframe with two fields (threshold, recall) representing the recall curve.
recallByThreshold() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl

recallByThreshold() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Returns the (threshold, recall) curve.
receive() - Method in interface org.apache.spark.rpc.RpcEndpoint
Process messages from RpcEndpointRef.send or RpcCallContext.reply.
receiveAndReply(RpcCallContext) - Method in interface org.apache.spark.rpc.RpcEndpoint
Process messages from RpcEndpointRef.ask.
ReceivedBlock - Interface in org.apache.spark.streaming.receiver
Trait representing a received block.
ReceivedBlockHandler - Interface in org.apache.spark.streaming.receiver
Trait that represents a class that handles the storage of blocks received by a receiver.
ReceivedBlockStoreResult - Interface in org.apache.spark.streaming.receiver
Trait that represents the metadata related to the storage of blocks.
ReceivedBlockTrackerLogEvent - Interface in org.apache.spark.streaming.scheduler
Trait representing any event in the ReceivedBlockTracker that updates its state.
Receiver<T> - Class in org.apache.spark.streaming.receiver
:: DeveloperApi :: Abstract class of a receiver that can be run on worker nodes to receive external data.
Receiver(StorageLevel) - Constructor for class org.apache.spark.streaming.receiver.Receiver

RECEIVER_WAL_CLASS_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils

RECEIVER_WAL_CLOSE_AFTER_WRITE_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils

RECEIVER_WAL_ENABLE_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils

RECEIVER_WAL_MAX_FAILURES_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils

RECEIVER_WAL_ROLLING_INTERVAL_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils

ReceiverInfo - Class in org.apache.spark.status.api.v1.streaming

ReceiverInfo - Class in org.apache.spark.streaming.scheduler
:: DeveloperApi :: Class having information about a receiver.
ReceiverInfo(int, String, boolean, String, String, String, String, long) - Constructor for class org.apache.spark.streaming.scheduler.ReceiverInfo

receiverInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverError

receiverInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStarted

receiverInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStopped

receiverInputDStream() - Method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream

receiverInputDStream() - Method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream

ReceiverInputDStream<T> - Class in org.apache.spark.streaming.dstream
Abstract class for defining any InputDStream that has to start a receiver on worker nodes to receive external data.
ReceiverInputDStream(StreamingContext, ClassTag<T>) - Constructor for class org.apache.spark.streaming.dstream.ReceiverInputDStream

ReceiverMessage - Interface in org.apache.spark.streaming.receiver
Messages sent to the Receiver.
ReceiverState - Class in org.apache.spark.streaming.scheduler
Enumeration to identify the current state of a Receiver.
ReceiverState() - Constructor for class org.apache.spark.streaming.scheduler.ReceiverState

receiverStream(Receiver<T>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream with any arbitrary user-implemented receiver.
receiverStream(Receiver<T>, ClassTag<T>) - Method in class org.apache.spark.streaming.StreamingContext
Create an input stream with any arbitrary user-implemented receiver.
ReceiverTrackerLocalMessage - Interface in org.apache.spark.streaming.scheduler
Messages used by the driver and ReceiverTrackerEndpoint to communicate locally.
ReceiverTrackerMessage - Interface in org.apache.spark.streaming.scheduler
Messages used by the NetworkReceiver and the ReceiverTracker to communicate with each other.
recentProgress() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
Returns an array of the most recent StreamingQueryProgress updates for this query.
recommendForAllItems(int) - Method in class org.apache.spark.ml.recommendation.ALSModel
Returns top numUsers users recommended for each item, for all items.
recommendForAllUsers(int) - Method in class org.apache.spark.ml.recommendation.ALSModel
Returns top numItems items recommended for each user, for all users.
recommendForItemSubset(Dataset<?>, int) - Method in class org.apache.spark.ml.recommendation.ALSModel
Returns top numUsers users recommended for each item id in the input data set.
recommendForUserSubset(Dataset<?>, int) - Method in class org.apache.spark.ml.recommendation.ALSModel
Returns top numItems items recommended for each user id in the input data set.
recommendProducts(int, int) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
Recommends products to a user.
recommendProductsForUsers(int) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
Recommends top products for all users.
recommendUsers(int, int) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
Recommends users to a product.
recommendUsersForProducts(int) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
Recommends top users for all products.
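The recommend* entries above all return the top-N counterparts ranked by predicted rating. A pure-Python sketch of the top-N-per-user shape that recommendForAllUsers produces, using a plain dict of predicted scores instead of a model and DataFrame (the dict layout here is illustrative, not Spark's):

```python
import heapq

def recommend_for_all_users(scores, num_items):
    """For each user, keep the num_items items with the highest
    predicted score, sorted descending, mirroring the (user ->
    ranked recommendations) shape of recommendForAllUsers."""
    return {
        user: heapq.nlargest(num_items, items.items(), key=lambda kv: kv[1])
        for user, items in scores.items()
    }

scores = {"u1": {"i1": 0.9, "i2": 0.4, "i3": 0.7}}
print(recommend_for_all_users(scores, 2))  # {'u1': [('i1', 0.9), ('i3', 0.7)]}
```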
recordReader(InputStream, Configuration) - Method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema

recordReaderClass() - Method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema

RECORDS_BETWEEN_BYTES_READ_METRIC_UPDATES() - Static method in class org.apache.spark.rdd.HadoopRDD
Update the input bytes read metric each time this number of records has been read.
RECORDS_READ() - Method in class org.apache.spark.InternalAccumulator.input$

RECORDS_READ() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$

RECORDS_WRITTEN() - Method in class org.apache.spark.InternalAccumulator.output$

RECORDS_WRITTEN() - Method in class org.apache.spark.InternalAccumulator.shuffleWrite$

recordsRead() - Method in class org.apache.spark.status.api.v1.InputMetricDistributions

recordsRead() - Method in class org.apache.spark.status.api.v1.InputMetrics

recordsRead() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics

recordsWritten() - Method in class org.apache.spark.status.api.v1.OutputMetricDistributions

recordsWritten() - Method in class org.apache.spark.status.api.v1.OutputMetrics

recordsWritten() - Method in class org.apache.spark.status.api.v1.ShuffleWriteMetrics

recordWriter(OutputStream, Configuration) - Method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema

recordWriterClass() - Method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema

recoverPartitions(String) - Method in class org.apache.spark.sql.catalog.Catalog
Recovers all the partitions in the directory of a table and updates the catalog.
RECOVERY_DIRECTORY() - Static method in class org.apache.spark.internal.config.Deploy

RECOVERY_MODE() - Static method in class org.apache.spark.internal.config.Deploy

RECOVERY_MODE_FACTORY() - Static method in class org.apache.spark.internal.config.Deploy

RecursiveFlag - Class in org.apache.spark.ml.image

RecursiveFlag() - Constructor for class org.apache.spark.ml.image.RecursiveFlag

recursiveList(File) - Static method in class org.apache.spark.TestUtils
Lists files recursively.
redact(SparkConf, Seq<Tuple2<String, String>>) - Static method in class org.apache.spark.util.Utils
Redact the sensitive values in the given map.
redact(Option<Regex>, Seq<Tuple2<K, V>>) - Static method in class org.apache.spark.util.Utils
Redact the sensitive values in the given map.
redact(Option<Regex>, String) - Static method in class org.apache.spark.util.Utils
Redact the sensitive information in the given string.
redact(Map<String, String>) - Static method in class org.apache.spark.util.Utils
Looks up the redaction regex from within the key value pairs and uses it to redact the rest of the key value pairs.
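The redact entries above hide sensitive values whose keys match a configured regex. A pure-Python sketch of the key-matching variant (in Spark the pattern comes from the spark.redaction.regex configuration; the replacement text and example keys below are assumptions for illustration):

```python
import re

REDACTION_REPLACEMENT = "*********(redacted)"

def redact(pattern, kvs):
    """Return the key-value pairs with every value whose key matches
    the given compiled regex replaced by a fixed redaction marker."""
    return [
        (k, REDACTION_REPLACEMENT if pattern.search(k) else v)
        for k, v in kvs
    ]

conf = [("spark.app.name", "demo"), ("spark.hadoop.fs.s3a.secret.key", "abc123")]
print(redact(re.compile("(?i)secret|password|token"), conf))
```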
redactCommandLineArgs(SparkConf, Seq<String>) - Static method in class org.apache.spark.util.Utils

REDIRECT_CONNECTOR_NAME() - Static method in class org.apache.spark.ui.JettyUtils

redirectableStream() - Method in class org.apache.spark.storage.memory.SerializedValuesHolder

redirectError() - Method in class org.apache.spark.launcher.SparkLauncher
Specifies that stderr in spark-submit should be redirected to stdout.
redirectError(ProcessBuilder.Redirect) - Method in class org.apache.spark.launcher.SparkLauncher
Redirects error output to the specified Redirect.
redirectError(File) - Method in class org.apache.spark.launcher.SparkLauncher
Redirects error output to the specified File.
redirectOutput(ProcessBuilder.Redirect) - Method in class org.apache.spark.launcher.SparkLauncher
Redirects standard output to the specified Redirect.
redirectOutput(File) - Method in class org.apache.spark.launcher.SparkLauncher
Redirects standard output to the specified File.
redirectToLog(String) - 类 中的方法org.apache.spark.launcher.SparkLauncher
Sets all output to be logged and redirected to a logger with the specified name.
reduce(Function2<T, T, T>) - 接口 中的方法org.apache.spark.api.java.JavaRDDLike
Reduces the elements of this RDD using the specified commutative and associative binary operator.
reduce(OpenHashMap<String, Object>[], Row) - 类 中的方法org.apache.spark.ml.feature.StringIndexerAggregator
 
reduce(Function2<T, T, T>) - 类 中的方法org.apache.spark.rdd.RDD
Reduces the elements of this RDD using the specified commutative and associative binary operator.
reduce(Function2<T, T, T>) - 类 中的方法org.apache.spark.sql.Dataset
(Scala-specific) Reduces the elements of this Dataset using the specified binary function.
reduce(ReduceFunction<T>) - 类 中的方法org.apache.spark.sql.Dataset
(Java-specific) Reduces the elements of this Dataset using the specified binary function.
reduce(BUF, IN) - 类 中的方法org.apache.spark.sql.expressions.Aggregator
Combine two values to produce a new value.
reduce(Function2<T, T, T>) - 接口 中的方法org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD has a single element generated by reducing each RDD of this DStream.
reduce(Function2<T, T, T>) - 类 中的方法org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD has a single element generated by reducing each RDD of this DStream.
reduceByKey(Partitioner, Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
Merge the values for each key using an associative and commutative reduce function.
reduceByKey(Function2<V, V, V>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
Merge the values for each key using an associative and commutative reduce function.
reduceByKey(Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
Merge the values for each key using an associative and commutative reduce function.
reduceByKey(Partitioner, Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Merge the values for each key using an associative and commutative reduce function.
reduceByKey(Function2<V, V, V>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
Merge the values for each key using an associative and commutative reduce function.
reduceByKey(Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Merge the values for each key using an associative and commutative reduce function.
reduceByKey(Function2<V, V, V>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying reduceByKey to each RDD.
reduceByKey(Function2<V, V, V>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying reduceByKey to each RDD.
reduceByKey(Function2<V, V, V>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying reduceByKey to each RDD.
reduceByKey(Function2<V, V, V>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying reduceByKey to each RDD.
reduceByKey(Function2<V, V, V>, int) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying reduceByKey to each RDD.
reduceByKey(Function2<V, V, V>, Partitioner) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying reduceByKey to each RDD.
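All of the reduceByKey entries above share the same semantics: values are merged per key with a function that must be associative and commutative, so partial merges can run on each partition before the shuffle. A minimal local sketch of that contract in plain Python (`reduce_by_key` is a hypothetical helper, not the Spark API):

```python
from collections import OrderedDict
from operator import add

def reduce_by_key(pairs, func):
    """Merge the values for each key using an associative and
    commutative reduce function (a local sketch of reduceByKey)."""
    merged = OrderedDict()
    for key, value in pairs:
        # Fold each value into the running result for its key.
        merged[key] = func(merged[key], value) if key in merged else value
    return dict(merged)

# Word-count style usage:
counts = reduce_by_key([("a", 1), ("b", 1), ("a", 1)], add)  # {"a": 2, "b": 1}
```

Because the function is associative and commutative, it does not matter in which order, or on which partition, the per-key merges happen.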
reduceByKeyAndWindow(Function2<V, V, V>, Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying reduceByKey over a sliding window on this DStream.
reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying reduceByKey over a sliding window.
reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying reduceByKey over a sliding window.
reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying reduceByKey over a sliding window.
reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying incremental reduceByKey over a sliding window.
reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration, int, Function<Tuple2<K, V>, Boolean>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying incremental reduceByKey over a sliding window.
reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration, Partitioner, Function<Tuple2<K, V>, Boolean>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying incremental reduceByKey over a sliding window.
reduceByKeyAndWindow(Function2<V, V, V>, Duration) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying reduceByKey over a sliding window on this DStream.
reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying reduceByKey over a sliding window.
reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration, int) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying reduceByKey over a sliding window.
reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration, Partitioner) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying reduceByKey over a sliding window.
reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration, int, Function1<Tuple2<K, V>, Object>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying incremental reduceByKey over a sliding window.
reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration, Partitioner, Function1<Tuple2<K, V>, Object>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying incremental reduceByKey over a sliding window.
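The variants above that take two reduce functions are the incremental ones: rather than re-reducing every element still inside the window, the new window value is derived from the old one by folding in the batches that entered the window and applying an inverse reduce to the batches that left. A toy sketch of that update rule, assuming the reduce function is invertible (plain Python, hypothetical names):

```python
def slide_window(old_total, entering, leaving, reduce_fn, inv_reduce_fn):
    """Incrementally update a windowed aggregate: fold in values that
    entered the window, then 'inverse reduce' values that left it."""
    total = old_total
    for v in entering:
        total = reduce_fn(total, v)      # e.g. add new batch counts
    for v in leaving:
        total = inv_reduce_fn(total, v)  # e.g. subtract expired counts
    return total

# Sum over a sliding window: previous window summed to 10; 4 enters, 3 leaves.
new_total = slide_window(10, [4], [3], lambda a, b: a + b, lambda a, b: a - b)
```

This is why those overloads require an inverse function: for a large window that slides by a small step, touching only the entering and leaving batches is far cheaper than re-reducing the whole window.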
reduceByKeyLocally(Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
Merge the values for each key using an associative and commutative reduce function, but return the result immediately to the master as a Map.
reduceByKeyLocally(Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Merge the values for each key using an associative and commutative reduce function, but return the results immediately to the master as a Map.
reduceByWindow(Function2<T, T, T>, Duration, Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD has a single element generated by reducing all elements in a sliding window over this DStream.
reduceByWindow(Function2<T, T, T>, Function2<T, T, T>, Duration, Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD has a single element generated by reducing all elements in a sliding window over this DStream.
reduceByWindow(Function2<T, T, T>, Duration, Duration) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD has a single element generated by reducing all elements in a sliding window over this DStream.
reduceByWindow(Function2<T, T, T>, Function2<T, T, T>, Duration, Duration) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD has a single element generated by reducing all elements in a sliding window over this DStream.
ReduceFunction<T> - Interface in org.apache.spark.api.java.function
Base interface for a function used in Dataset's reduce.
reduceGroups(Function2<V, V, V>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
(Scala-specific) Reduces the elements of each group of data using the specified binary function.
reduceGroups(ReduceFunction<V>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
(Java-specific) Reduces the elements of each group of data using the specified binary function.
reduceId() - Method in class org.apache.spark.FetchFailed

reduceId() - Method in class org.apache.spark.storage.ShuffleBlockId

reduceId() - Method in class org.apache.spark.storage.ShuffleDataBlockId

reduceId() - Method in class org.apache.spark.storage.ShuffleIndexBlockId

Ref - Class in org.apache.spark.sql.connector.expressions
Convenience extractor for any NamedReference.
Ref() - Constructor for class org.apache.spark.sql.connector.expressions.Ref

reference(String) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions

references() - Method in interface org.apache.spark.sql.connector.expressions.Transform
Returns all field references in the transform arguments.
references() - Method in class org.apache.spark.sql.sources.AlwaysFalse

references() - Method in class org.apache.spark.sql.sources.AlwaysTrue

references() - Method in class org.apache.spark.sql.sources.And

references() - Method in class org.apache.spark.sql.sources.EqualNullSafe

references() - Method in class org.apache.spark.sql.sources.EqualTo

references() - Method in class org.apache.spark.sql.sources.Filter
List of columns that are referenced by this filter.
references() - Method in class org.apache.spark.sql.sources.GreaterThan

references() - Method in class org.apache.spark.sql.sources.GreaterThanOrEqual

references() - Method in class org.apache.spark.sql.sources.In

references() - Method in class org.apache.spark.sql.sources.IsNotNull

references() - Method in class org.apache.spark.sql.sources.IsNull

references() - Method in class org.apache.spark.sql.sources.LessThan

references() - Method in class org.apache.spark.sql.sources.LessThanOrEqual

references() - Method in class org.apache.spark.sql.sources.Not

references() - Method in class org.apache.spark.sql.sources.Or

references() - Method in class org.apache.spark.sql.sources.StringContains

references() - Method in class org.apache.spark.sql.sources.StringEndsWith

references() - Method in class org.apache.spark.sql.sources.StringStartsWith

refreshByPath(String) - Method in class org.apache.spark.sql.catalog.Catalog
Invalidates and refreshes all the cached data (and the associated metadata) for any Dataset that contains the given data source path.
refreshTable(String) - Method in class org.apache.spark.sql.catalog.Catalog
Invalidates and refreshes all the cached data and metadata of the given table.
regex(Regex) - Static method in class org.apache.spark.ml.feature.RFormulaParser

regexFromString(String, String) - Static method in class org.apache.spark.internal.config.ConfigHelpers

regexp_extract(Column, String, int) - Static method in class org.apache.spark.sql.functions
Extract a specific group matched by a Java regex, from the specified string column.
regexp_replace(Column, String, String) - Static method in class org.apache.spark.sql.functions
Replace all substrings of the specified string value that match regexp with rep.
regexp_replace(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
Replace all substrings of the specified string value that match regexp with rep.
RegexTokenizer - Class in org.apache.spark.ml.feature
A regex based tokenizer that extracts tokens either by using the provided regex pattern to split the text (default) or repeatedly matching the regex (if gaps is false).
RegexTokenizer(String) - Constructor for class org.apache.spark.ml.feature.RegexTokenizer

RegexTokenizer() - Constructor for class org.apache.spark.ml.feature.RegexTokenizer

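RegexTokenizer's two modes correspond to the two ways a regex can tokenize text: splitting on the pattern (gaps = true, the default, where the pattern matches the separators) versus repeatedly matching the pattern (gaps = false, where the pattern matches the tokens themselves). Illustrated here with Python's re module rather than the Spark API:

```python
import re

text = "Hello, Spark  world"

# gaps = true (default): the regex matches the gaps BETWEEN tokens,
# so the text is split on it.
split_tokens = re.split(r"\s+", text)

# gaps = false: the regex matches the tokens themselves,
# so matches are collected.
match_tokens = re.findall(r"\w+", text)
```

Note the difference in output: splitting on whitespace keeps punctuation attached to tokens ("Hello,"), while matching `\w+` strips it.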
register(SparkContext, Map<String, DoubleAccumulator>) - Static method in class org.apache.spark.metrics.source.DoubleAccumulatorSource

register(SparkContext, Map<String, LongAccumulator>) - Static method in class org.apache.spark.metrics.source.LongAccumulatorSource

register(String, RpcEndpoint) - Method in class org.apache.spark.rpc.netty.SharedMessageLoop

register(AccumulatorV2<?, ?>) - Method in class org.apache.spark.SparkContext
Register the given accumulator.
register(AccumulatorV2<?, ?>, String) - Method in class org.apache.spark.SparkContext
Register the given accumulator with the given name.
register(String, String) - Static method in class org.apache.spark.sql.types.UDTRegistration
Registers a UserDefinedType for a user class.
register(String, UserDefinedAggregateFunction) - Method in class org.apache.spark.sql.UDFRegistration
Registers a user-defined aggregate function (UDAF).
register(String, UserDefinedFunction) - Method in class org.apache.spark.sql.UDFRegistration
Registers a user-defined function (UDF), for a UDF that's already defined using the Dataset API (i.e. of type UserDefinedFunction).
register(String, Function0<RT>, TypeTags.TypeTag<RT>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 0 arguments as user-defined function (UDF).
register(String, Function1<A1, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 1 argument as user-defined function (UDF).
register(String, Function2<A1, A2, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 2 arguments as user-defined function (UDF).
register(String, Function3<A1, A2, A3, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 3 arguments as user-defined function (UDF).
register(String, Function4<A1, A2, A3, A4, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 4 arguments as user-defined function (UDF).
register(String, Function5<A1, A2, A3, A4, A5, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 5 arguments as user-defined function (UDF).
register(String, Function6<A1, A2, A3, A4, A5, A6, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 6 arguments as user-defined function (UDF).
register(String, Function7<A1, A2, A3, A4, A5, A6, A7, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 7 arguments as user-defined function (UDF).
register(String, Function8<A1, A2, A3, A4, A5, A6, A7, A8, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 8 arguments as user-defined function (UDF).
register(String, Function9<A1, A2, A3, A4, A5, A6, A7, A8, A9, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 9 arguments as user-defined function (UDF).
register(String, Function10<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 10 arguments as user-defined function (UDF).
register(String, Function11<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 11 arguments as user-defined function (UDF).
register(String, Function12<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 12 arguments as user-defined function (UDF).
register(String, Function13<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 13 arguments as user-defined function (UDF).
register(String, Function14<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 14 arguments as user-defined function (UDF).
register(String, Function15<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 15 arguments as user-defined function (UDF).
register(String, Function16<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 16 arguments as user-defined function (UDF).
register(String, Function17<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>, TypeTags.TypeTag<A17>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 17 arguments as user-defined function (UDF).
register(String, Function18<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>, TypeTags.TypeTag<A17>, TypeTags.TypeTag<A18>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 18 arguments as user-defined function (UDF).
register(String, Function19<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>, TypeTags.TypeTag<A17>, TypeTags.TypeTag<A18>, TypeTags.TypeTag<A19>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 19 arguments as user-defined function (UDF).
register(String, Function20<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>, TypeTags.TypeTag<A17>, TypeTags.TypeTag<A18>, TypeTags.TypeTag<A19>, TypeTags.TypeTag<A20>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 20 arguments as user-defined function (UDF).
register(String, Function21<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>, TypeTags.TypeTag<A17>, TypeTags.TypeTag<A18>, TypeTags.TypeTag<A19>, TypeTags.TypeTag<A20>, TypeTags.TypeTag<A21>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 21 arguments as user-defined function (UDF).
register(String, Function22<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, A22, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>, TypeTags.TypeTag<A17>, TypeTags.TypeTag<A18>, TypeTags.TypeTag<A19>, TypeTags.TypeTag<A20>, TypeTags.TypeTag<A21>, TypeTags.TypeTag<A22>) - Method in class org.apache.spark.sql.UDFRegistration
Registers a deterministic Scala closure of 22 arguments as user-defined function (UDF).
register(String, UDF0<?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF0 instance as user-defined function (UDF).
register(String, UDF1<?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF1 instance as user-defined function (UDF).
register(String, UDF2<?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF2 instance as user-defined function (UDF).
register(String, UDF3<?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF3 instance as user-defined function (UDF).
register(String, UDF4<?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF4 instance as user-defined function (UDF).
register(String, UDF5<?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF5 instance as user-defined function (UDF).
register(String, UDF6<?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF6 instance as user-defined function (UDF).
register(String, UDF7<?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF7 instance as user-defined function (UDF).
register(String, UDF8<?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF8 instance as user-defined function (UDF).
register(String, UDF9<?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF9 instance as user-defined function (UDF).
register(String, UDF10<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF10 instance as user-defined function (UDF).
register(String, UDF11<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF11 instance as user-defined function (UDF).
register(String, UDF12<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF12 instance as user-defined function (UDF).
register(String, UDF13<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF13 instance as user-defined function (UDF).
register(String, UDF14<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF14 instance as user-defined function (UDF).
register(String, UDF15<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF15 instance as user-defined function (UDF).
register(String, UDF16<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF16 instance as user-defined function (UDF).
register(String, UDF17<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF17 instance as user-defined function (UDF).
register(String, UDF18<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF18 instance as user-defined function (UDF).
register(String, UDF19<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF19 instance as user-defined function (UDF).
register(String, UDF20<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF20 instance as user-defined function (UDF).
register(String, UDF21<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF21 instance as user-defined function (UDF).
register(String, UDF22<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
Register a deterministic Java UDF22 instance as user-defined function (UDF).
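All of the register overloads above follow one pattern: bind a SQL-visible name to a function together with its result type, so the query planner can resolve calls to that name later. A toy registry illustrating that shape (plain Python; `FunctionRegistry` is a hypothetical illustration, not Spark's internal class):

```python
class FunctionRegistry:
    """Toy registry mapping SQL function names to callables, mirroring the
    name -> (function, return type) binding that UDFRegistration.register
    performs."""

    def __init__(self):
        self._funcs = {}

    def register(self, name, func, return_type):
        # Later registrations under the same name replace earlier ones.
        self._funcs[name] = (func, return_type)

    def invoke(self, name, *args):
        func, _return_type = self._funcs[name]
        return func(*args)

registry = FunctionRegistry()
registry.register("strlen", lambda s: len(s), "int")
result = registry.invoke("strlen", "spark")
```

The declared return type matters because the SQL engine must know the result schema at planning time, before the function ever runs; that is why every Java UDF overload takes an explicit DataType.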
register(QueryExecutionListener) - Method in class org.apache.spark.sql.util.ExecutionListenerManager
Registers the specified QueryExecutionListener.
register(AccumulatorV2<?, ?>) - Static method in class org.apache.spark.util.AccumulatorContext
Registers an AccumulatorV2 created on the driver such that it can be used on the executors.
register(String, Function0<Object>) - Static method in class org.apache.spark.util.SignalUtils
Adds an action to be run when a given signal is received by this process.
registerAvroSchemas(Seq<Schema>) - Method in class org.apache.spark.SparkConf
Use Kryo serialization and register the given set of Avro schemas so that the generic record serializer can decrease network IO.
RegisterBlockManager(BlockManagerId, String[], long, long, org.apache.spark.rpc.RpcEndpointRef) - Constructor for class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager

RegisterBlockManager$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager$

registerClasses(Kryo) - Method in interface org.apache.spark.serializer.KryoRegistrator

RegisterClusterManager(org.apache.spark.rpc.RpcEndpointRef) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager

RegisterClusterManager$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager$

registerDialect(JdbcDialect) - Static method in class org.apache.spark.sql.jdbc.JdbcDialects
Register a dialect for use on all new matching JDBC org.apache.spark.sql.DataFrame.
RegisteredExecutor$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisteredExecutor$

RegisterExecutor(String, org.apache.spark.rpc.RpcEndpointRef, String, int, Map<String, String>, Map<String, String>, Map<String, ResourceInformation>) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor

RegisterExecutor$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor$

RegisterExecutorFailed(String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutorFailed

RegisterExecutorFailed$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutorFailed$

registerKryoClasses(SparkConf) - Static method in class org.apache.spark.graphx.GraphXUtils
Registers classes that GraphX uses with Kryo.
registerKryoClasses(SparkContext) - Static method in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette
This method registers the class SquaredEuclideanSilhouette.ClusterStats for Kryo serialization.
registerKryoClasses(Class<?>[]) - Method in class org.apache.spark.SparkConf
Use Kryo serialization and register the given set of classes with Kryo.
registerLogger(Logger) - Static method in class org.apache.spark.util.SignalUtils
Register a signal handler to log signals on UNIX-like systems.
registerShuffle(int) - Method in interface org.apache.spark.shuffle.api.ShuffleDriverComponents
Called once per shuffle id when the shuffle id is first generated for a shuffle stage.
registerShutdownDeleteDir(File) - Static method in class org.apache.spark.util.ShutdownHookManager

registerStream(DStream<BinarySample>) - Method in class org.apache.spark.mllib.stat.test.StreamingTest
Register a DStream of values for significance testing.
registerStream(JavaDStream<BinarySample>) - Method in class org.apache.spark.mllib.stat.test.StreamingTest
Register a JavaDStream of values for significance testing.
regParam() - Method in class org.apache.spark.ml.classification.LinearSVC

regParam() - Method in class org.apache.spark.ml.classification.LinearSVCModel

regParam() - Method in class org.apache.spark.ml.classification.LogisticRegression

regParam() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel

regParam() - Method in interface org.apache.spark.ml.optim.loss.DifferentiableRegularization
Magnitude of the regularization penalty.
regParam() - Method in interface org.apache.spark.ml.param.shared.HasRegParam
Param for regularization parameter (>= 0).
regParam() - Method in class org.apache.spark.ml.recommendation.ALS

regParam() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression

regParam() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel

regParam() - Method in class org.apache.spark.ml.regression.LinearRegression

regParam() - Method in class org.apache.spark.ml.regression.LinearRegressionModel

Regression() - Static method in class org.apache.spark.mllib.tree.configuration.Algo

RegressionEvaluator - Class in org.apache.spark.ml.evaluation
Evaluator for regression, which expects two input columns: prediction and label.
RegressionEvaluator(String) - Constructor for class org.apache.spark.ml.evaluation.RegressionEvaluator

RegressionEvaluator() - Constructor for class org.apache.spark.ml.evaluation.RegressionEvaluator

RegressionMetrics - Class in org.apache.spark.mllib.evaluation
Evaluator for regression.
RegressionMetrics(RDD<? extends Product>, boolean) - Constructor for class org.apache.spark.mllib.evaluation.RegressionMetrics

RegressionMetrics(RDD<? extends Product>) - Constructor for class org.apache.spark.mllib.evaluation.RegressionMetrics

RegressionModel<FeaturesType,M extends RegressionModel<FeaturesType,M>> - Class in org.apache.spark.ml.regression
:: DeveloperApi :: Model produced by a Regressor.
RegressionModel() - Constructor for class org.apache.spark.ml.regression.RegressionModel

RegressionModel - Interface in org.apache.spark.mllib.regression

Regressor<FeaturesType,Learner extends Regressor<FeaturesType,Learner,M>,M extends RegressionModel<FeaturesType,M>> - Class in org.apache.spark.ml.regression
Single-label regression.
Regressor() - Constructor for class org.apache.spark.ml.regression.Regressor

reindex() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl

reindex() - Method in class org.apache.spark.graphx.VertexRDD
Construct a new VertexRDD that is indexed by only the visible vertices.
RelationalGroupedDataset - Class in org.apache.spark.sql
A set of methods for aggregations on a DataFrame, created by groupBy, cube or rollup (and also pivot).
RelationalGroupedDataset.CubeType$ - Class in org.apache.spark.sql
To indicate it's the CUBE
RelationalGroupedDataset.GroupByType$ - Class in org.apache.spark.sql
To indicate it's the GroupBy
RelationalGroupedDataset.GroupType - Interface in org.apache.spark.sql
The Grouping Type
RelationalGroupedDataset.PivotType$ - Class in org.apache.spark.sql

RelationalGroupedDataset.RollupType$ - Class in org.apache.spark.sql
To indicate it's the ROLLUP
RelationConversions - Class in org.apache.spark.sql.hive
Relation conversion from metastore relations to data source relations for better performance: when writing to non-partitioned Hive-serde Parquet/ORC tables, and when scanning Hive-serde Parquet/ORC tables. This rule must be run before all other DDL post-hoc resolution rules.
RelationConversions(SQLConf, HiveSessionCatalog) - Constructor for class org.apache.spark.sql.hive.RelationConversions

RelationProvider - Interface in org.apache.spark.sql.sources
Implemented by objects that produce relations for a specific kind of data source.
relativeDirection(long) - Method in class org.apache.spark.graphx.Edge
Return the relative direction of the edge to the corresponding vertex.
relativeError() - Method in class org.apache.spark.ml.feature.Imputer

relativeError() - Method in class org.apache.spark.ml.feature.ImputerModel

relativeError() - Method in class org.apache.spark.ml.feature.QuantileDiscretizer

relativeError() - Method in class org.apache.spark.ml.feature.RobustScaler

relativeError() - Method in class org.apache.spark.ml.feature.RobustScalerModel

relativeError() - Method in interface org.apache.spark.ml.param.shared.HasRelativeError
Param for the relative target precision for the approximate quantile algorithm.
relativeError() - Method in class org.apache.spark.util.sketch.CountMinSketch
Returns the relative error (or eps) of this CountMinSketch.
release(Seq<String>) - 接口 中的方法org.apache.spark.resource.ResourceAllocator
Release a sequence of resource addresses, these addresses must have been assigned.
rem(byte, byte) - 类 中的静态方法org.apache.spark.sql.types.ByteExactNumeric
 
rem(Decimal, Decimal) - 类 中的方法org.apache.spark.sql.types.Decimal.DecimalAsIfIntegral$
 
rem(int, int) - 类 中的静态方法org.apache.spark.sql.types.IntegerExactNumeric
 
rem(long, long) - 类 中的静态方法org.apache.spark.sql.types.LongExactNumeric
 
rem(short, short) - 类 中的静态方法org.apache.spark.sql.types.ShortExactNumeric
 
remainder(Decimal) - Method in class org.apache.spark.sql.types.Decimal

remember(Duration) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Sets each DStream in this context to remember RDDs it generated in the last given duration.
remember(Duration) - Method in class org.apache.spark.streaming.StreamingContext
Set each DStream in this context to remember RDDs it generated in the last given duration.
REMOTE_BLOCKS_FETCHED() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$

REMOTE_BYTES_READ() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$

REMOTE_BYTES_READ_TO_DISK() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$

remoteBlocksFetched() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions

remoteBlocksFetched() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics

remoteBytesRead() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions

remoteBytesRead() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics

remoteBytesReadToDisk() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions

remoteBytesReadToDisk() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics

remove(Param<T>) - Method in class org.apache.spark.ml.param.ParamMap
Removes a key from this map and returns the value previously associated with it, as an Option.
remove(String) - Method in class org.apache.spark.SparkConf
Remove a parameter from the configuration.
remove() - Method in interface org.apache.spark.sql.streaming.GroupState
Remove this state.
remove(String) - Method in class org.apache.spark.sql.types.MetadataBuilder

remove(Object) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap

remove() - Method in class org.apache.spark.streaming.State
Remove the state if it exists.
remove(long) - Static method in class org.apache.spark.util.AccumulatorContext
Unregisters the AccumulatorV2 with the given ID, if any.
removeAllListeners() - Method in interface org.apache.spark.util.ListenerBus
Remove all listeners; they won't receive any further events.
RemoveBlock(BlockId) - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveBlock

RemoveBlock$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveBlock$

RemoveBroadcast(long, boolean) - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast

RemoveBroadcast$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast$

removeDistribution(LiveExecutor) - Method in class org.apache.spark.status.LiveRDD

RemoveExecutor(String, org.apache.spark.scheduler.ExecutorLossReason) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor

RemoveExecutor(String) - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveExecutor

RemoveExecutor$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor$

RemoveExecutor$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveExecutor$

removeFromDriver() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast

removeListener(StreamingQueryListener) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
removeListener(L) - Method in interface org.apache.spark.util.ListenerBus
Remove a listener; it won't receive any further events.
removeListenerOnError(SparkListenerInterface) - Method in class org.apache.spark.scheduler.AsyncEventQueue

removeListenerOnError(L) - Method in interface org.apache.spark.util.ListenerBus
This can be overridden by subclasses if there is any extra cleanup to do when removing a listener.
removeMapOutput(int, BlockManagerId) - Method in class org.apache.spark.ShuffleStatus
Remove the map output which was served by the specified block manager.
removeOutputsByFilter(Function1<BlockManagerId, Object>) - Method in class org.apache.spark.ShuffleStatus
Removes all shuffle outputs which satisfy the filter.
removeOutputsOnExecutor(String) - Method in class org.apache.spark.ShuffleStatus
Removes all map outputs associated with the specified executor.
removeOutputsOnHost(String) - Method in class org.apache.spark.ShuffleStatus
Removes all shuffle outputs associated with this host.
removePartition(String) - Method in class org.apache.spark.status.LiveRDD

removePartition(LiveRDDPartition) - Method in class org.apache.spark.status.RDDPartitionSeq

removeProperty(String) - Static method in interface org.apache.spark.sql.connector.catalog.NamespaceChange
Create a NamespaceChange for removing a namespace property.
removeProperty(String) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
Create a TableChange for removing a table property.
RemoveRdd(int) - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveRdd

RemoveRdd$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveRdd$

removeReason() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

removeReason() - Method in class org.apache.spark.status.LiveExecutor

removeSchedulable(Schedulable) - Method in interface org.apache.spark.scheduler.Schedulable

removeSelfEdges() - Method in class org.apache.spark.graphx.GraphOps
Remove self edges.
removeShuffle(int, boolean) - Method in interface org.apache.spark.shuffle.api.ShuffleDriverComponents
Removes shuffle data associated with the given shuffle.
RemoveShuffle(int) - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveShuffle

RemoveShuffle$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveShuffle$

removeShutdownDeleteDir(File) - Static method in class org.apache.spark.util.ShutdownHookManager

removeShutdownHook(Object) - Static method in class org.apache.spark.util.ShutdownHookManager
Remove a previously installed shutdown hook.
removeSparkListener(SparkListenerInterface) - Method in class org.apache.spark.SparkContext
:: DeveloperApi :: Deregister the listener from Spark's listener bus.
removeStreamingListener(StreamingListener) - Method in class org.apache.spark.streaming.StreamingContext

removeTime() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

removeTime() - Method in class org.apache.spark.status.LiveExecutor

RemoveWorker(String, String, String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker

RemoveWorker$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker$

renameColumn(String[], String) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
Create a TableChange for renaming a field.
renameFunction(String, String, String) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Rename an existing function in the database.
renamePartitions(String, String, Seq<Map<String, String>>, Seq<Map<String, String>>) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Rename one or many existing table partitions, assuming they exist.
renameTable(Identifier, Identifier) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension

renameTable(Identifier, Identifier) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
Renames a table in the catalog.
rep(Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser

rep1(Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser

rep1(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser

rep1sep(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser

repartition(int) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return a new RDD that has exactly numPartitions partitions.
repartition(int) - Method in class org.apache.spark.api.java.JavaPairRDD
Return a new RDD that has exactly numPartitions partitions.
repartition(int) - Method in class org.apache.spark.api.java.JavaRDD
Return a new RDD that has exactly numPartitions partitions.
repartition(int, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
Return a new RDD that has exactly numPartitions partitions.
repartition(int, Column...) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset partitioned by the given partitioning expressions into numPartitions.
repartition(Column...) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset partitioned by the given partitioning expressions, using spark.sql.shuffle.partitions as number of partitions.
repartition(int) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset that has exactly numPartitions partitions.
repartition(int, Seq<Column>) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset partitioned by the given partitioning expressions into numPartitions.
repartition(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset partitioned by the given partitioning expressions, using spark.sql.shuffle.partitions as number of partitions.
repartition(int) - Method in class org.apache.spark.streaming.api.java.JavaDStream
Return a new DStream with an increased or decreased level of parallelism.
repartition(int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream with an increased or decreased level of parallelism.
repartition(int) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream with an increased or decreased level of parallelism.
repartitionAndSortWithinPartitions(Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys.
repartitionAndSortWithinPartitions(Partitioner, Comparator<K>) - Method in class org.apache.spark.api.java.JavaPairRDD
Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys.
repartitionAndSortWithinPartitions(Partitioner) - Method in class org.apache.spark.rdd.OrderedRDDFunctions
Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys.
repartitionByRange(int, Column...) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset partitioned by the given partitioning expressions into numPartitions.
repartitionByRange(Column...) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset partitioned by the given partitioning expressions, using spark.sql.shuffle.partitions as number of partitions.
repartitionByRange(int, Seq<Column>) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset partitioned by the given partitioning expressions into numPartitions.
repartitionByRange(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset partitioned by the given partitioning expressions, using spark.sql.shuffle.partitions as number of partitions.
repeat(Column, int) - Static method in class org.apache.spark.sql.functions
Repeats a string column n times, and returns it as a new string column.
replace() - Method in interface org.apache.spark.sql.CreateTableWriter
Replace an existing table with the contents of the data frame.
replace(String, Map<T, T>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Replaces values matching keys in replacement map with the corresponding values.
replace(String[], Map<T, T>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Replaces values matching keys in replacement map with the corresponding values.
replace(String, Map<T, T>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
(Scala-specific) Replaces values matching keys in replacement map.
replace(Seq<String>, Map<T, T>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
(Scala-specific) Replaces values matching keys in replacement map.
replace() - Method in class org.apache.spark.sql.DataFrameWriterV2

replaceCharType(DataType) - Static method in class org.apache.spark.sql.types.HiveStringType

replicas() - Method in class org.apache.spark.storage.BlockManagerMessages.ReplicateBlock

ReplicateBlock(BlockId, Seq<BlockManagerId>, int) - Constructor for class org.apache.spark.storage.BlockManagerMessages.ReplicateBlock

ReplicateBlock$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.ReplicateBlock$

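The `repartition` and `repartitionByRange` overloads above differ in how rows are assigned to partitions: hash partitioning on the given expressions versus range partitioning. A minimal sketch, assuming a `SparkSession` is in scope; the dataset `df` and the column name `"age"` are hypothetical:

```scala
// Sketch: hash- vs range-based repartitioning of a Dataset.
import org.apache.spark.sql.functions.col

val hashed  = df.repartition(8, col("age"))        // hash of `age` picks the partition
val ranged  = df.repartitionByRange(8, col("age")) // contiguous ranges of `age` per partition
val resized = df.repartition(4)                    // resize to exactly 4 partitions
```

Range partitioning keeps ordered data clustered, which can help subsequent sorted writes; hash partitioning spreads skewed keys more evenly across partitions.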
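The Scala-specific `DataFrameNaFunctions.replace` overloads above take a column name (or several) and a replacement map. A minimal sketch; the dataset `df` and the columns `"height"` and `"name"` are hypothetical:

```scala
// Sketch: replacing sentinel values via df.na.replace.
val fixedHeights = df.na.replace("height", Map(0.0 -> Double.NaN))          // single column
val fixedNames   = df.na.replace(Seq("name"), Map("UNKNOWN" -> "unnamed"))  // multiple columns
```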
replicatedVertexView() - Method in class org.apache.spark.graphx.impl.GraphImpl

replication() - Method in class org.apache.spark.storage.StorageLevel

reply(Object) - Method in interface org.apache.spark.rpc.RpcCallContext
Reply with a message to the sender.
repN(int, Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser

report() - Method in interface org.apache.spark.metrics.sink.Sink

reportError(String, Throwable) - Method in class org.apache.spark.streaming.receiver.Receiver
Report exceptions in receiving data.
repsep(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser

requestedTotal() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors

requesterHost() - Method in class org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus

requestExecutors(int) - Method in interface org.apache.spark.ExecutorAllocationClient
Request an additional number of executors from the cluster manager.
RequestExecutors(int, int, Map<String, Object>, Set<String>) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors

requestExecutors(int) - Method in class org.apache.spark.SparkContext
:: DeveloperApi :: Request an additional number of executors from the cluster manager.
RequestExecutors$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors$

requestTotalExecutors(int, int, Map<String, Object>) - Method in interface org.apache.spark.ExecutorAllocationClient
Update the cluster manager on our scheduling needs.
requestTotalExecutors(int, int, Map<String, Object>) - Method in class org.apache.spark.SparkContext
Update the cluster manager on our scheduling needs.
res() - Method in class org.apache.spark.mllib.optimization.NNLS.Workspace

reservoirSampleAndCount(Iterator<T>, int, long, ClassTag<T>) - Static method in class org.apache.spark.util.random.SamplingUtils
Reservoir sampling implementation that also returns the input size.
reset() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
Resets the values of all metrics to zero.
reset() - Method in interface org.apache.spark.sql.hive.client.HiveClient
Used for testing only.
reset() - Method in class org.apache.spark.storage.BufferReleasingInputStream

reset() - Method in class org.apache.spark.util.AccumulatorV2
Resets this accumulator to its zero value, i.e. calling isZero afterwards must return true.
reset() - Method in class org.apache.spark.util.CollectionAccumulator

reset() - Method in class org.apache.spark.util.DoubleAccumulator

reset() - Method in class org.apache.spark.util.LongAccumulator

resetTerminated() - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
Forget about past terminated queries so that awaitAnyTermination() can be used again to wait for new terminations.
residualDegreeOfFreedom() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary

residualDegreeOfFreedomNull() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary

residuals() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
Get the default residuals (deviance residuals) of the fitted model.
residuals(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
Get the residuals of the fitted model by type.
residuals() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary

ResolveHiveSerdeTable - Class in org.apache.spark.sql.hive
Determine the database, serde/format and schema of the Hive serde table, according to the storage properties.
ResolveHiveSerdeTable(SparkSession) - Constructor for class org.apache.spark.sql.hive.ResolveHiveSerdeTable

resolveURI(String) - Static method in class org.apache.spark.util.Utils
Return a well-formed URI for the file described by a user input string.
resolveURIs(String) - Static method in class org.apache.spark.util.Utils
Resolve a comma-separated list of paths.
resourceAddresses() - Method in interface org.apache.spark.resource.ResourceAllocator

ResourceAllocator - Interface in org.apache.spark.resource
Trait used to help executor/worker allocate resources.
ResourceInformation - Class in org.apache.spark.resource
Class to hold information about a type of Resource.
ResourceInformation(String, String[]) - Constructor for class org.apache.spark.resource.ResourceInformation

ResourceInformationJson - Class in org.apache.spark.resource
A case class to simplify JSON serialization of ResourceInformation.
ResourceInformationJson(String, Seq<String>) - Constructor for class org.apache.spark.resource.ResourceInformationJson

resourceName() - Method in interface org.apache.spark.resource.ResourceAllocator

resources() - Method in class org.apache.spark.api.java.JavaSparkContext

resources() - Method in class org.apache.spark.BarrierTaskContext

resources() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor

resources() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate

resources() - Method in class org.apache.spark.SparkContext

resources() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

resources() - Method in class org.apache.spark.status.LiveExecutor

resources() - Method in class org.apache.spark.TaskContext
Resources allocated to the task.
resourcesInfo() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo

resourcesJMap() - Method in class org.apache.spark.BarrierTaskContext

resourcesJMap() - Method in class org.apache.spark.TaskContext
(Java-specific) Resources allocated to the task.
resourcesMapFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

resourcesMapToJson(Map<String, ResourceInformation>) - Static method in class org.apache.spark.util.JsonProtocol

resourcesMeetRequirements(Map<String, Object>, Seq<ResourceRequirement>) - Static method in class org.apache.spark.resource.ResourceUtils

ResourceUtils - Class in org.apache.spark.resource

ResourceUtils() - Constructor for class org.apache.spark.resource.ResourceUtils

responder() - Method in class org.apache.spark.ui.JettyUtils.ServletParams

responseFromBackup(String) - Static method in class org.apache.spark.util.Utils
Return true if the response message is sent from a backup Master on standby.
restart(String) - Method in class org.apache.spark.streaming.receiver.Receiver
Restart the receiver.
restart(String, Throwable) - Method in class org.apache.spark.streaming.receiver.Receiver
Restart the receiver.
restart(String, Throwable, int) - Method in class org.apache.spark.streaming.receiver.Receiver
Restart the receiver.
ResubmitFailedStages - Class in org.apache.spark.scheduler

ResubmitFailedStages() - Constructor for class org.apache.spark.scheduler.ResubmitFailedStages

Resubmitted - Class in org.apache.spark
:: DeveloperApi :: A org.apache.spark.scheduler.ShuffleMapTask that completed successfully earlier, but we lost the executor before the stage completed.
Resubmitted() - Constructor for class org.apache.spark.Resubmitted

result(Duration, CanAwait) - Method in class org.apache.spark.ComplexFutureAction

result(Duration, CanAwait) - Method in interface org.apache.spark.FutureAction
Awaits and returns the result (of type T) of this action.
result(Duration, CanAwait) - Method in class org.apache.spark.SimpleFutureAction

RESULT_SERIALIZATION_TIME() - Static method in class org.apache.spark.InternalAccumulator

RESULT_SERIALIZATION_TIME() - Static method in class org.apache.spark.ui.jobs.TaskDetailsClassNames

RESULT_SERIALIZATION_TIME() - Static method in class org.apache.spark.ui.ToolTips

RESULT_SIZE() - Static method in class org.apache.spark.InternalAccumulator

RESULT_SIZE() - Static method in class org.apache.spark.status.TaskIndexNames

resultFetchStart() - Method in class org.apache.spark.status.api.v1.TaskData

resultSerializationTime() - Method in class org.apache.spark.status.api.v1.StageData

resultSerializationTime() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions

resultSerializationTime() - Method in class org.apache.spark.status.api.v1.TaskMetrics

resultSetToObjectArray(ResultSet) - Static method in class org.apache.spark.rdd.JdbcRDD

resultSize() - Method in class org.apache.spark.status.api.v1.StageData

resultSize() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions

resultSize() - Method in class org.apache.spark.status.api.v1.TaskMetrics

RETAINED_APPLICATIONS() - Static method in class org.apache.spark.internal.config.Deploy

RETAINED_APPLICATIONS() - Static method in class org.apache.spark.internal.config.History

RETAINED_DRIVERS() - Static method in class org.apache.spark.internal.config.Deploy

RetrieveDelegationTokens$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveDelegationTokens$

RetrieveLastAllocatedExecutorId$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveLastAllocatedExecutorId$

RetrieveSparkAppConfig$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveSparkAppConfig$

retryWaitMs(SparkConf) - Static method in class org.apache.spark.util.RpcUtils
Returns the configured number of milliseconds to wait on each retry.
ReturnStatementFinder - Class in org.apache.spark.util

ReturnStatementFinder(Option<String>) - Constructor for class org.apache.spark.util.ReturnStatementFinder

reverse() - Method in class org.apache.spark.graphx.EdgeDirection
Reverse the direction of an edge.
reverse() - Method in class org.apache.spark.graphx.EdgeRDD
Reverse all the edges in this RDD.
reverse() - Method in class org.apache.spark.graphx.Graph
Reverses all edges in the graph.
reverse() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl

reverse() - Method in class org.apache.spark.graphx.impl.GraphImpl

reverse(Column) - Static method in class org.apache.spark.sql.functions
Returns a reversed string or an array with reverse order of elements.
reverse() - Static method in class org.apache.spark.sql.types.ByteExactNumeric

reverse() - Static method in class org.apache.spark.sql.types.DecimalExactNumeric

reverse() - Static method in class org.apache.spark.sql.types.DoubleExactNumeric

reverse() - Static method in class org.apache.spark.sql.types.FloatExactNumeric

reverse() - Static method in class org.apache.spark.sql.types.IntegerExactNumeric

reverse() - Static method in class org.apache.spark.sql.types.LongExactNumeric

reverse() - Static method in class org.apache.spark.sql.types.ShortExactNumeric

reversed() - Static method in class org.apache.spark.sql.types.ByteExactNumeric

reversed() - Static method in class org.apache.spark.sql.types.DecimalExactNumeric

reversed() - Static method in class org.apache.spark.sql.types.DoubleExactNumeric

reversed() - Static method in class org.apache.spark.sql.types.FloatExactNumeric

reversed() - Static method in class org.apache.spark.sql.types.IntegerExactNumeric

reversed() - Static method in class org.apache.spark.sql.types.LongExactNumeric

reversed() - Static method in class org.apache.spark.sql.types.ShortExactNumeric

reverseRoutingTables() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl

reverseRoutingTables() - Method in class org.apache.spark.graphx.VertexRDD
Returns a new VertexRDD reflecting a reversal of all edge directions in the corresponding EdgeRDD.
ReviveOffers - Class in org.apache.spark.scheduler.local

ReviveOffers() - Constructor for class org.apache.spark.scheduler.local.ReviveOffers

reviveOffers() - Method in interface org.apache.spark.scheduler.SchedulerBackend

ReviveOffers$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ReviveOffers$

RewritableTransform - Interface in org.apache.spark.sql.connector.expressions
Allows Spark to rewrite the given references of the transform during analysis.
RFormula - Class in org.apache.spark.ml.feature
Implements the transforms required for fitting a dataset against an R model formula.
RFormula(String) - Constructor for class org.apache.spark.ml.feature.RFormula

RFormula() - Constructor for class org.apache.spark.ml.feature.RFormula

RFormulaBase - Interface in org.apache.spark.ml.feature
Base trait for RFormula and RFormulaModel.
RFormulaModel - Class in org.apache.spark.ml.feature
Model fitted by RFormula.
RFormulaParser - Class in org.apache.spark.ml.feature
Limited implementation of R formula parsing.
RFormulaParser() - Constructor for class org.apache.spark.ml.feature.RFormulaParser

RidgeRegressionModel - Class in org.apache.spark.mllib.regression
Regression model trained using RidgeRegression.
RidgeRegressionModel(Vector, double) - Constructor for class org.apache.spark.mllib.regression.RidgeRegressionModel

RidgeRegressionWithSGD - Class in org.apache.spark.mllib.regression
Train a regression model with L2-regularization using Stochastic Gradient Descent.
right() - Method in class org.apache.spark.sql.sources.And

right() - Method in class org.apache.spark.sql.sources.Or

rightCategories() - Method in class org.apache.spark.ml.tree.CategoricalSplit
Get sorted categories which split to the right.
rightChild() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData

rightChild() - Method in class org.apache.spark.ml.tree.InternalNode

rightChildIndex(int) - Static method in class org.apache.spark.mllib.tree.model.Node
Return the index of the right child of this node.
rightImpurity() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats

rightNode() - Method in class org.apache.spark.mllib.tree.model.Node

rightNodeId() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData

rightOuterJoin(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
Perform a right outer join of this and other.
rightOuterJoin(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
Perform a right outer join of this and other.
rightOuterJoin(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
Perform a right outer join of this and other.
rightOuterJoin(RDD<Tuple2<K, W>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
Perform a right outer join of this and other.
rightOuterJoin(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Perform a right outer join of this and other.
rightOuterJoin(RDD<Tuple2<K, W>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
Perform a right outer join of this and other.
rightOuterJoin(JavaPairDStream<K, W>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
rightOuterJoin(JavaPairDStream<K, W>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
rightOuterJoin(JavaPairDStream<K, W>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
rightOuterJoin(DStream<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
rightOuterJoin(DStream<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
rightOuterJoin(DStream<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
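The `rightOuterJoin` overloads above keep every key of the other (right-hand) dataset, wrapping missing left-side values in an Option. A minimal sketch on pair RDDs, assuming a `SparkContext` named `sc` is in scope:

```scala
// Sketch: right outer join of two pair RDDs.
val left  = sc.parallelize(Seq(("a", 1), ("b", 2)))
val right = sc.parallelize(Seq(("b", "x"), ("c", "y")))

// Every key of `right` survives; "c" has no left match, so its left value is None.
val joined = left.rightOuterJoin(right) // RDD[(String, (Option[Int], String))]
```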
rint(Column) - 类 中的静态方法org.apache.spark.sql.functions
Returns the double value that is closest in value to the argument and is equal to a mathematical integer.
rint(String) - 类 中的静态方法org.apache.spark.sql.functions
Returns the double value that is closest in value to the argument and is equal to a mathematical integer.
rlike(String) - 类 中的方法org.apache.spark.sql.Column
SQL RLIKE expression (LIKE with Regex).
RMATa() - 类 中的静态方法org.apache.spark.graphx.util.GraphGenerators
 
RMATb() - 类 中的静态方法org.apache.spark.graphx.util.GraphGenerators
 
RMATc() - 类 中的静态方法org.apache.spark.graphx.util.GraphGenerators
 
RMATd() - 类 中的静态方法org.apache.spark.graphx.util.GraphGenerators
 
rmatGraph(SparkContext, int, int) - 类 中的静态方法org.apache.spark.graphx.util.GraphGenerators
A random graph generator using the R-MAT model, proposed in "R-MAT: A Recursive Model for Graph Mining" by Chakrabarti et al.
rnd() - 类 中的方法org.apache.spark.rdd.DefaultPartitionCoalescer
 
RobustScaler - org.apache.spark.ml.feature中的类
Scale features using statistics that are robust to outliers.
RobustScaler(String) - 类 的构造器org.apache.spark.ml.feature.RobustScaler
 
RobustScaler() - 类 的构造器org.apache.spark.ml.feature.RobustScaler
 
RobustScalerModel - org.apache.spark.ml.feature中的类
Model fitted by RobustScaler.
RobustScalerParams - org.apache.spark.ml.feature中的接口
roc() - 接口 中的方法org.apache.spark.ml.classification.BinaryLogisticRegressionSummary
Returns the receiver operating characteristic (ROC) curve, which is a Dataframe having two fields (FPR, TPR) with (0.0, 0.0) prepended and (1.0, 1.0) appended to it.
roc() - 类 中的方法org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl
 
roc() - 类 中的方法org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Returns the receiver operating characteristic (ROC) curve, which is an RDD of (false positive rate, true positive rate) with (0.0, 0.0) prepended and (1.0, 1.0) appended to it.
rolledOver() - 接口 中的方法org.apache.spark.util.logging.RollingPolicy
Notify that rollover has occurred
RollingPolicy - org.apache.spark.util.logging中的接口
Defines the policy based on which RollingFileAppender will generate rolling files.
rollup(Column...) - 类 中的方法org.apache.spark.sql.Dataset
Create a multi-dimensional rollup for the current Dataset using the specified columns, so we can run aggregation on them.
rollup(String, String...) - 类 中的方法org.apache.spark.sql.Dataset
Create a multi-dimensional rollup for the current Dataset using the specified columns, so we can run aggregation on them.
rollup(Seq<Column>) - 类 中的方法org.apache.spark.sql.Dataset
Create a multi-dimensional rollup for the current Dataset using the specified columns, so we can run aggregation on them.
rollup(String, Seq<String>) - 类 中的方法org.apache.spark.sql.Dataset
Create a multi-dimensional rollup for the current Dataset using the specified columns, so we can run aggregation on them.
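The `rollup` overloads above group at progressively coarser levels of the listed columns. A minimal sketch; the dataset `sales` with columns `"country"`, `"city"`, and `"amount"` is hypothetical:

```scala
// Sketch: multi-dimensional rollup aggregation on a Dataset.
val totals = sales.rollup("country", "city").sum("amount")
// Yields subtotals per (country, city), per country, and a grand total;
// rolled-up levels appear as nulls in the grouping columns.
```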
RollupType$() - Constructor for class org.apache.spark.sql.RelationalGroupedDataset.RollupType$

rootAllocator() - Static method in class org.apache.spark.sql.util.ArrowUtils

rootMeanSquaredError() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
Returns the root mean squared error, which is defined as the square root of the mean squared error.
rootMeanSquaredError() - Method in class org.apache.spark.mllib.evaluation.RegressionMetrics
Returns the root mean squared error, which is defined as the square root of the mean squared error.
rootNode() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel

rootNode() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel

rootNode() - Method in interface org.apache.spark.ml.tree.DecisionTreeModel
Root of the decision tree.
rootPool() - Method in interface org.apache.spark.scheduler.SchedulableBuilder

rootPool() - Method in interface org.apache.spark.scheduler.TaskScheduler

round(Column) - Static method in class org.apache.spark.sql.functions
Returns the value of the column e rounded to 0 decimal places with HALF_UP round mode.
round(Column, int) - Static method in class org.apache.spark.sql.functions
Round the value of e to scale decimal places with HALF_UP round mode if scale is greater than or equal to 0, or at the integral part when scale is less than 0.
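The HALF_UP semantics above, including a negative scale that rounds at the integral part, can be mimicked in plain Python with the `decimal` module (a sketch for illustration; `round_half_up` is a hypothetical helper, not a Spark call):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(value, scale=0):
    """Round at `scale` decimal places; a negative scale rounds the
    integral part, mirroring the HALF_UP behavior described above."""
    exp = Decimal(1).scaleb(-scale)  # scale=2 -> 0.01, scale=-2 -> 1E+2
    return Decimal(str(value)).quantize(exp, rounding=ROUND_HALF_UP)

print(round_half_up(2.5))         # 3 (ties round away from zero)
print(round_half_up(3.14159, 2))  # 3.14
print(round_half_up(1250, -2))    # 1.3E+3, i.e. 1300
```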
ROUND_CEILING() - Static method in class org.apache.spark.sql.types.Decimal

ROUND_FLOOR() - Static method in class org.apache.spark.sql.types.Decimal

ROUND_HALF_EVEN() - Static method in class org.apache.spark.sql.types.Decimal

ROUND_HALF_UP() - Static method in class org.apache.spark.sql.types.Decimal

ROW() - Static method in class org.apache.spark.api.r.SerializationFormats

Row - Interface in org.apache.spark.sql
Represents one row of output from a relational operator.
row(T) - Method in interface org.apache.spark.ui.PagedTable

row_number() - Static method in class org.apache.spark.sql.functions
Window function: returns a sequential number starting at 1 within a window partition.
RowFactory - Class in org.apache.spark.sql
A factory class used to construct Row objects.
RowFactory() - Constructor for class org.apache.spark.sql.RowFactory

rowIndices() - Method in class org.apache.spark.ml.linalg.SparseMatrix

rowIndices() - Method in class org.apache.spark.mllib.linalg.SparseMatrix

rowIter() - Method in interface org.apache.spark.ml.linalg.Matrix
Returns an iterator of row vectors.
rowIter() - Method in interface org.apache.spark.mllib.linalg.Matrix
Returns an iterator of row vectors.
rowIterator() - Method in class org.apache.spark.sql.vectorized.ColumnarBatch
Returns an iterator over the rows in this batch.
RowMatrix - Class in org.apache.spark.mllib.linalg.distributed
Represents a row-oriented distributed Matrix with no meaningful row indices.
RowMatrix(RDD<Vector>, long, int) - Constructor for class org.apache.spark.mllib.linalg.distributed.RowMatrix

RowMatrix(RDD<Vector>) - Constructor for class org.apache.spark.mllib.linalg.distributed.RowMatrix
Alternative constructor leaving matrix dimensions to be determined automatically.
rows() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix

rows() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix

rowsBetween(long, long) - Static method in class org.apache.spark.sql.expressions.Window
Creates a WindowSpec with the frame boundaries defined, from start (inclusive) to end (inclusive).
rowsBetween(long, long) - Method in class org.apache.spark.sql.expressions.WindowSpec
Defines the frame boundaries, from start (inclusive) to end (inclusive).
rowsPerBlock() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix

rPackages() - Static method in class org.apache.spark.api.r.RUtils

rpad(Column, int, String) - Static method in class org.apache.spark.sql.functions
Right-pad the string column with pad to a length of len.
RpcAbortException - Exception in org.apache.spark.rpc
An exception thrown if the RPC is aborted.
RpcAbortException(String) - Constructor for exception org.apache.spark.rpc.RpcAbortException

RpcCallContext - Interface in org.apache.spark.rpc
A callback that RpcEndpoint can use to send back a message or failure.
RpcEndpoint - Interface in org.apache.spark.rpc
An end point for the RPC that defines what functions to trigger given a message.
rpcEnv() - Method in interface org.apache.spark.rpc.RpcEndpoint
The RpcEnv that this RpcEndpoint is registered to.
RpcEnvFactory - Interface in org.apache.spark.rpc
A factory class to create the RpcEnv.
RpcEnvFileServer - Interface in org.apache.spark.rpc
A server used by the RpcEnv to serve files to other processes owned by the application.
RpcUtils - Class in org.apache.spark.util

RpcUtils() - Constructor for class org.apache.spark.util.RpcUtils

RRDD<T> - Class in org.apache.spark.api.r
An RDD that stores serialized R objects as Array[Byte].
RRDD(RDD<T>, byte[], String, String, byte[], Object[], ClassTag<T>) - Constructor for class org.apache.spark.api.r.RRDD

RRunnerModes - Class in org.apache.spark.api.r

RRunnerModes() - Constructor for class org.apache.spark.api.r.RRunnerModes

rtrim(Column) - Static method in class org.apache.spark.sql.functions
Trim the spaces from the right end of the specified string value.
rtrim(Column, String) - Static method in class org.apache.spark.sql.functions
Trim the specified character string from the right end of the specified string column.
ruleName() - Static method in class org.apache.spark.sql.dynamicpruning.CleanupDynamicPruningFilters

ruleName() - Static method in class org.apache.spark.sql.dynamicpruning.PartitionPruning

ruleName() - Static method in class org.apache.spark.sql.hive.HiveAnalysis

run(Graph<VD, ED>, int, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.ConnectedComponents
Compute the connected component membership of each vertex and return a graph with the vertex value containing the lowest vertex id in the connected component containing that vertex.
run(Graph<VD, ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.ConnectedComponents
Compute the connected component membership of each vertex and return a graph with the vertex value containing the lowest vertex id in the connected component containing that vertex.
run(Graph<VD, ED>, int, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.LabelPropagation
Run static Label Propagation for detecting communities in networks.
run(Graph<VD, ED>, int, double, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.PageRank
Run PageRank for a fixed number of iterations, returning a graph with vertex attributes containing the PageRank and edge attributes the normalized edge weight.
run(Graph<VD, ED>, Seq<Object>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.ShortestPaths
Computes shortest paths to the given set of landmark vertices.
run(Graph<VD, ED>, int, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.StronglyConnectedComponents
Compute the strongly connected component (SCC) of each vertex and return a graph with the vertex value containing the lowest vertex id in the SCC containing that vertex.
run(RDD<Edge<Object>>, SVDPlusPlus.Conf) - Static method in class org.apache.spark.graphx.lib.SVDPlusPlus
Implement SVD++ based on "Factorization Meets the Neighborhood: a Multifaceted Collaborative Filtering Model".
run(Graph<VD, ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.TriangleCount

run(RDD<org.apache.spark.ml.feature.Instance>, BoostingStrategy, long, String) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
Method to train a gradient boosting model.
run(RDD<LabeledPoint>, Strategy, int, String, long) - Static method in class org.apache.spark.ml.tree.impl.RandomForest
Train a random forest.
run(RDD<org.apache.spark.ml.feature.Instance>, Strategy, int, String, long, Option<org.apache.spark.ml.util.Instrumentation>, boolean, Option<String>) - Static method in class org.apache.spark.ml.tree.impl.RandomForest
Train a random forest.
run(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
Run Logistic Regression with the configured parameters on an input RDD of LabeledPoint entries.
run(RDD<LabeledPoint>, Vector) - Method in class org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
Run Logistic Regression with the configured parameters on an input RDD of LabeledPoint entries, starting from the initial weights provided.
run(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.classification.NaiveBayes
Run the algorithm with the configured parameters on an input RDD of LabeledPoint entries.
run(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
Runs the bisecting k-means algorithm.
run(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
Java-friendly version of run().
run(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
Perform expectation maximization.
run(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
Java-friendly version of run().
run(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.KMeans
Train a K-means model on the given set of points; data should be cached for high performance, because this is an iterative algorithm.
run(RDD<Tuple2<Object, Vector>>) - Method in class org.apache.spark.mllib.clustering.LDA
Learn an LDA model using the given dataset.
run(JavaPairRDD<Long, Vector>) - Method in class org.apache.spark.mllib.clustering.LDA
Java-friendly version of run().
run(Graph<Object, Object>) - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering
Run the PIC algorithm on Graph.
run(RDD<Tuple3<Object, Object, Object>>) - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering
Run the PIC algorithm.
run(JavaRDD<Tuple3<Long, Long, Double>>) - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering
A Java-friendly version of PowerIterationClustering.run.
run(RDD<FPGrowth.FreqItemset<Item>>, ClassTag<Item>) - Method in class org.apache.spark.mllib.fpm.AssociationRules
Computes the association rules with confidence above minConfidence.
run(RDD<FPGrowth.FreqItemset<Item>>, Map<Item, Object>, ClassTag<Item>) - Method in class org.apache.spark.mllib.fpm.AssociationRules
Computes the association rules with confidence above minConfidence.
run(JavaRDD<FPGrowth.FreqItemset<Item>>) - Method in class org.apache.spark.mllib.fpm.AssociationRules
Java-friendly version of run.
run(RDD<Object>, ClassTag<Item>) - Method in class org.apache.spark.mllib.fpm.FPGrowth
Computes an FP-Growth model that contains frequent itemsets.
run(JavaRDD<Basket>) - Method in class org.apache.spark.mllib.fpm.FPGrowth
Java-friendly version of run.
run(RDD<Object[]>, ClassTag<Item>) - Method in class org.apache.spark.mllib.fpm.PrefixSpan
Finds the complete set of frequent sequential patterns in the input sequences of itemsets.
run(JavaRDD<Sequence>) - Method in class org.apache.spark.mllib.fpm.PrefixSpan
A Java-friendly version of run() that reads sequences from a JavaRDD and returns frequent sequences in a PrefixSpanModel.
run(RDD<Rating>) - Method in class org.apache.spark.mllib.recommendation.ALS
Run ALS with the configured parameters on an input RDD of Rating objects.
run(JavaRDD<Rating>) - Method in class org.apache.spark.mllib.recommendation.ALS
Java-friendly version of ALS.run.
run(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
Run the algorithm with the configured parameters on an input RDD of LabeledPoint entries.
run(RDD<LabeledPoint>, Vector) - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
Run the algorithm with the configured parameters on an input RDD of LabeledPoint entries, starting from the initial weights provided.
run(RDD<Tuple3<Object, Object, Object>>) - Method in class org.apache.spark.mllib.regression.IsotonicRegression
Run the IsotonicRegression algorithm to obtain an isotonic regression model.
run(JavaRDD<Tuple3<Double, Double, Double>>) - Method in class org.apache.spark.mllib.regression.IsotonicRegression
Run the pool adjacent violators algorithm to obtain an isotonic regression model.
run(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.tree.DecisionTree
Method to train a decision tree model over an RDD.
run(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.tree.GradientBoostedTrees
Method to train a gradient boosting model.
run(JavaRDD<LabeledPoint>) - Method in class org.apache.spark.mllib.tree.GradientBoostedTrees
Java-friendly API for org.apache.spark.mllib.tree.GradientBoostedTrees.run.
run(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.tree.RandomForest
Method to train a random forest model over an RDD.
run(SparkSession, SparkPlan) - Method in interface org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectBase

run(SparkSession, SparkPlan) - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveDirCommand

run(SparkSession, SparkPlan) - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
Inserts all the rows in the table into Hive.
run() - Method in class org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread

run() - Method in class org.apache.spark.util.SparkShutdownHook

runApproximateJob(RDD<T>, Function2<TaskContext, Iterator<T>, U>, ApproximateEvaluator<U, R>, long) - Method in class org.apache.spark.SparkContext
:: DeveloperApi :: Run a job that can return approximate results.
runId() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
Returns the unique id of this run of the query.
runId() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryStartedEvent

runId() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryTerminatedEvent

runId() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress

runInNewThread(String, boolean, Function0<T>) - Static method in class org.apache.spark.util.ThreadUtils
Run a piece of code in a new thread and return the result.
runJob(RDD<T>, Function2<TaskContext, Iterator<T>, U>, Seq<Object>, Function2<Object, U, BoxedUnit>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
Run a function on a given set of partitions in an RDD and pass the results to the given handler function.
runJob(RDD<T>, Function2<TaskContext, Iterator<T>, U>, Seq<Object>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
Run a function on a given set of partitions in an RDD and return the results as an array.
runJob(RDD<T>, Function1<Iterator<T>, U>, Seq<Object>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
Run a function on a given set of partitions in an RDD and return the results as an array.
runJob(RDD<T>, Function2<TaskContext, Iterator<T>, U>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
Run a job on all partitions in an RDD and return the results in an array.
runJob(RDD<T>, Function1<Iterator<T>, U>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
Run a job on all partitions in an RDD and return the results in an array.
runJob(RDD<T>, Function2<TaskContext, Iterator<T>, U>, Function2<Object, U, BoxedUnit>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
Run a job on all partitions in an RDD and pass the results to a handler function.
runJob(RDD<T>, Function1<Iterator<T>, U>, Function2<Object, U, BoxedUnit>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
Run a job on all partitions in an RDD and pass the results to a handler function.
runLBFGS(RDD<Tuple2<Object, Vector>>, Gradient, Updater, int, double, int, double, Vector) - Static method in class org.apache.spark.mllib.optimization.LBFGS
Run Limited-memory BFGS (L-BFGS) in parallel.
runMiniBatchSGD(RDD<Tuple2<Object, Vector>>, Gradient, Updater, double, int, double, double, Vector, double) - Static method in class org.apache.spark.mllib.optimization.GradientDescent
Run stochastic gradient descent (SGD) in parallel using mini batches.
runMiniBatchSGD(RDD<Tuple2<Object, Vector>>, Gradient, Updater, double, int, double, double, Vector) - Static method in class org.apache.spark.mllib.optimization.GradientDescent
Alias of runMiniBatchSGD with convergenceTol set to the default value of 0.001.
running() - Method in class org.apache.spark.scheduler.TaskInfo

RUNNING() - Static method in class org.apache.spark.TaskState

runningTasks() - Method in interface org.apache.spark.scheduler.Schedulable

runParallelPersonalizedPageRank(Graph<VD, ED>, int, double, long[], ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.PageRank
Run Personalized PageRank for a fixed number of iterations, for a set of starting nodes in parallel.
runPreCanonicalized(Graph<VD, ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.TriangleCount

runSqlHive(String) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Runs a HiveQL command using Hive, returning the results as a list of strings.
runtime() - Method in class org.apache.spark.status.api.v1.ApplicationEnvironmentInfo

RuntimeConfig - Class in org.apache.spark.sql
Runtime configuration interface for Spark.
RuntimeInfo - Class in org.apache.spark.status.api.v1

RuntimePercentage - Class in org.apache.spark.scheduler

RuntimePercentage(double, Option<Object>, double) - Constructor for class org.apache.spark.scheduler.RuntimePercentage

runUntilConvergence(Graph<VD, ED>, double, double, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.PageRank
Run a dynamic version of PageRank, returning a graph with vertex attributes containing the PageRank and edge attributes containing the normalized edge weight.
runUntilConvergenceWithOptions(Graph<VD, ED>, double, double, Option<Object>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.PageRank
Run a dynamic version of PageRank, returning a graph with vertex attributes containing the PageRank and edge attributes containing the normalized edge weight.
runWithOptions(Graph<VD, ED>, int, double, Option<Object>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.PageRank
Run PageRank for a fixed number of iterations, returning a graph with vertex attributes containing the PageRank and edge attributes the normalized edge weight.
runWithValidation(RDD<org.apache.spark.ml.feature.Instance>, RDD<org.apache.spark.ml.feature.Instance>, BoostingStrategy, long, String) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
Method to validate a gradient boosting model.
runWithValidation(RDD<LabeledPoint>, RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.tree.GradientBoostedTrees
Method to validate a gradient boosting model.
runWithValidation(JavaRDD<LabeledPoint>, JavaRDD<LabeledPoint>) - Method in class org.apache.spark.mllib.tree.GradientBoostedTrees
Java-friendly API for org.apache.spark.mllib.tree.GradientBoostedTrees.runWithValidation.
RUtils - Class in org.apache.spark.api.r

RUtils() - Constructor for class org.apache.spark.api.r.RUtils

RWrappers - Class in org.apache.spark.ml.r
This is the Scala stub of SparkR read.ml.
RWrappers() - Constructor for class org.apache.spark.ml.r.RWrappers

RWrapperUtils - Class in org.apache.spark.ml.r

RWrapperUtils() - Constructor for class org.apache.spark.ml.r.RWrapperUtils


S

s() - Method in class org.apache.spark.mllib.linalg.SingularValueDecomposition

safeCall(Function0<T>) - Method in interface org.apache.spark.security.CryptoStreamUtils.BaseErrorHandler

SAFEMODE_CHECK_INTERVAL_S() - Static method in class org.apache.spark.internal.config.History
sameThread() - Static method in class org.apache.spark.util.ThreadUtils
An ExecutionContextExecutor that runs each task in the thread that invokes execute/submit.
sample(boolean, Double) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return a sampled subset of this RDD.
sample(boolean, Double, long) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return a sampled subset of this RDD.
sample(boolean, double) - Method in class org.apache.spark.api.java.JavaPairRDD
Return a sampled subset of this RDD.
sample(boolean, double, long) - Method in class org.apache.spark.api.java.JavaPairRDD
Return a sampled subset of this RDD.
sample(boolean, double) - Method in class org.apache.spark.api.java.JavaRDD
Return a sampled subset of this RDD with a random seed.
sample(boolean, double, long) - Method in class org.apache.spark.api.java.JavaRDD
Return a sampled subset of this RDD, with a user-supplied seed.
sample(boolean, double, long) - Method in class org.apache.spark.rdd.RDD
Return a sampled subset of this RDD.
sample(double, long) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset by sampling a fraction of rows (without replacement), using a user-supplied seed.
sample(double) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset by sampling a fraction of rows (without replacement), using a random seed.
sample(boolean, double, long) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset by sampling a fraction of rows, using a user-supplied seed.
sample(boolean, double) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset by sampling a fraction of rows, using a random seed.
sample() - Method in class org.apache.spark.util.random.BernoulliCellSampler

sample() - Method in class org.apache.spark.util.random.BernoulliSampler

sample() - Method in class org.apache.spark.util.random.PoissonSampler

sample(Iterator<T>) - Method in class org.apache.spark.util.random.PoissonSampler

sample(Iterator<T>) - Method in interface org.apache.spark.util.random.RandomSampler
Take a random sample.
sample() - Method in interface org.apache.spark.util.random.RandomSampler
Whether to sample the next item or not.
sampleBy(String, Map<T, Object>, long) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Returns a stratified sample without replacement based on the fraction given on each stratum.
sampleBy(String, Map<T, Double>, long) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Returns a stratified sample without replacement based on the fraction given on each stratum.
sampleBy(Column, Map<T, Object>, long) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Returns a stratified sample without replacement based on the fraction given on each stratum.
sampleBy(Column, Map<T, Double>, long) - Method in class org.apache.spark.sql.DataFrameStatFunctions
(Java-specific) Returns a stratified sample without replacement based on the fraction given on each stratum.
sampleByKey(boolean, Map<K, Double>, long) - Method in class org.apache.spark.api.java.JavaPairRDD
Return a subset of this RDD sampled by key (via stratified sampling).
sampleByKey(boolean, Map<K, Double>) - Method in class org.apache.spark.api.java.JavaPairRDD
Return a subset of this RDD sampled by key (via stratified sampling).
sampleByKey(boolean, Map<K, Object>, long) - Method in class org.apache.spark.rdd.PairRDDFunctions
Return a subset of this RDD sampled by key (via stratified sampling).
sampleByKeyExact(boolean, Map<K, Double>, long) - Method in class org.apache.spark.api.java.JavaPairRDD
Return a subset of this RDD sampled by key (via stratified sampling) containing exactly math.ceil(numItems * samplingRate) for each stratum (group of pairs with the same key).
sampleByKeyExact(boolean, Map<K, Double>) - Method in class org.apache.spark.api.java.JavaPairRDD
Return a subset of this RDD sampled by key (via stratified sampling) containing exactly math.ceil(numItems * samplingRate) for each stratum (group of pairs with the same key).
sampleByKeyExact(boolean, Map<K, Object>, long) - Method in class org.apache.spark.rdd.PairRDDFunctions
Return a subset of this RDD sampled by key (via stratified sampling) containing exactly math.ceil(numItems * samplingRate) for each stratum (group of pairs with the same key).
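The per-stratum guarantee above — exactly math.ceil(numItems * samplingRate) pairs for each key — can be illustrated with a plain-Python sketch (not Spark's distributed implementation, which avoids collecting each stratum; `sample_by_key_exact` is a hypothetical helper):

```python
import math
import random

def sample_by_key_exact(pairs, fractions, seed=42):
    """For each key, draw exactly ceil(num_items * fraction) pairs without
    replacement, matching the sampleByKeyExact size guarantee."""
    rng = random.Random(seed)
    by_key = {}
    for k, v in pairs:
        by_key.setdefault(k, []).append(v)
    out = []
    for k, values in by_key.items():
        n = math.ceil(len(values) * fractions[k])
        out.extend((k, v) for v in rng.sample(values, n))
    return out

data = [("a", i) for i in range(10)] + [("b", i) for i in range(5)]
sampled = sample_by_key_exact(data, {"a": 0.3, "b": 0.5})
# exactly ceil(10 * 0.3) = 3 "a" pairs and ceil(5 * 0.5) = 3 "b" pairs
```

Unlike sampleByKey, whose per-stratum counts only have the right expectation, the exact variant pays extra passes over the data to hit these sizes deterministically.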
SamplePathFilter - Class in org.apache.spark.ml.image
Filter that allows loading a fraction of HDFS files.
SamplePathFilter() - Constructor for class org.apache.spark.ml.image.SamplePathFilter

samplePointsPerPartitionHint() - Method in class org.apache.spark.RangePartitioner

sampleRatio() - Method in class org.apache.spark.ml.image.SamplePathFilter

sampleStdev() - Method in class org.apache.spark.api.java.JavaDoubleRDD
Compute the sample standard deviation of this RDD's elements (which corrects for bias in estimating the standard deviation by dividing by N-1 instead of N).
sampleStdev() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
Compute the sample standard deviation of this RDD's elements (which corrects for bias in estimating the standard deviation by dividing by N-1 instead of N).
sampleStdev() - Method in class org.apache.spark.util.StatCounter
Return the sample standard deviation of the values, which corrects for bias in estimating the variance by dividing by N-1 instead of N.
sampleVariance() - Method in class org.apache.spark.api.java.JavaDoubleRDD
Compute the sample variance of this RDD's elements (which corrects for bias in estimating the variance by dividing by N-1 instead of N).
sampleVariance() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
Compute the sample variance of this RDD's elements (which corrects for bias in estimating the variance by dividing by N-1 instead of N).
sampleVariance() - Method in class org.apache.spark.util.StatCounter
Return the sample variance, which corrects for bias in estimating the variance by dividing by N-1 instead of N.
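The N-1 bias correction that sampleStdev and sampleVariance refer to (Bessel's correction) is easy to show in plain Python, independent of Spark:

```python
def sample_variance(xs):
    """Unbiased sample variance: divide the sum of squared deviations
    by N-1 instead of N, as sampleVariance/sampleStdev do."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

print(sample_variance([1.0, 2.0, 3.0, 4.0]))  # 1.6666...
```

Dividing by N instead (the population variance, Spark's variance()/stdev()) systematically underestimates the variance of the distribution the sample was drawn from.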
SamplingUtils - Class in org.apache.spark.util.random

SamplingUtils() - Constructor for class org.apache.spark.util.random.SamplingUtils

sanitizeDirName(String) - Static method in class org.apache.spark.util.Utils

satisfy(Distribution) - Method in interface org.apache.spark.sql.connector.read.partitioning.Partitioning
Returns true if this partitioning can satisfy the given distribution, which means Spark does not need to shuffle the output data of this data source for certain operations.
save(String) - Method in interface org.apache.spark.ml.util.MLWritable
Saves this ML instance to the input path, a shortcut of write.save(path).
save(String) - Method in class org.apache.spark.ml.util.MLWriter
Saves the ML instances to the input path.
save(SparkContext, String, String, int, int, Vector, double, Option<Object>) - Method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$
Helper method for saving GLM classification model metadata and data.
save(SparkContext, String) - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel

save(SparkContext, String) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel

save(SparkContext, String, org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.Data) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$

save(SparkContext, String, org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.Data) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$

save(SparkContext, String) - Method in class org.apache.spark.mllib.classification.SVMModel

save(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel

save(SparkContext, BisectingKMeansModel, String) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0$

save(SparkContext, BisectingKMeansModel, String) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV2_0$

save(SparkContext, BisectingKMeansModel, String) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV3_0$

save(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel

save(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel

save(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.KMeansModel

save(SparkContext, KMeansModel, String) - Method in class org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0$

save(SparkContext, KMeansModel, String) - Method in class org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV2_0$

save(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel

save(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel

save(SparkContext, PowerIterationClusteringModel, String) - Method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0$

save(SparkContext, String) - Method in class org.apache.spark.mllib.feature.ChiSqSelectorModel

save(SparkContext, ChiSqSelectorModel, String) - Method in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$

save(SparkContext, String) - Method in class org.apache.spark.mllib.feature.Word2VecModel

save(SparkContext, String) - Method in class org.apache.spark.mllib.fpm.FPGrowthModel
Save this model to the given path.
save(FPGrowthModel<?>, String) - Method in class org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0$

save(SparkContext, String) - Method in class org.apache.spark.mllib.fpm.PrefixSpanModel
Save this model to the given path.
save(PrefixSpanModel<?>, String) - Method in class org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0$

save(SparkContext, String) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
Save this model to the given path.
save(MatrixFactorizationModel, String) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0$
Saves a MatrixFactorizationModel, where user features are saved under data/users and product features are saved under data/products.
save(SparkContext, String, String, Vector, double) - Method in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$
Helper method for saving GLM regression model metadata and data.
save(SparkContext, String) - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel

save(SparkContext, String) - Method in class org.apache.spark.mllib.regression.LassoModel

save(SparkContext, String) - Method in class org.apache.spark.mllib.regression.LinearRegressionModel

save(SparkContext, String) - Method in class org.apache.spark.mllib.regression.RidgeRegressionModel

save(SparkContext, String) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel

save(SparkContext, String, DecisionTreeModel) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$

save(SparkContext, String) - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel

save(SparkContext, String) - Method in class org.apache.spark.mllib.tree.model.RandomForestModel

save(SparkContext, String) - Method in interface org.apache.spark.mllib.util.Saveable
Save this model to the given path.
save(String) - Method in class org.apache.spark.sql.DataFrameWriter
Saves the content of the DataFrame at the specified path.
save() - Method in class org.apache.spark.sql.DataFrameWriter
Saves the content of the DataFrame as the specified table.
Saveable - Interface in org.apache.spark.mllib.util
:: DeveloperApi :: Trait for models and transformers which may be saved as files.
saveAsHadoopDataset(JobConf) - Method in class org.apache.spark.api.java.JavaPairRDD
Output the RDD to any Hadoop-supported storage system, using a Hadoop JobConf object for that storage system.
saveAsHadoopDataset(JobConf) - Method in class org.apache.spark.rdd.PairRDDFunctions
Output the RDD to any Hadoop-supported storage system, using a Hadoop JobConf object for that storage system.
saveAsHadoopFile(String, Class<?>, Class<?>, Class<F>, JobConf) - Method in class org.apache.spark.api.java.JavaPairRDD
Output the RDD to any Hadoop-supported file system.
saveAsHadoopFile(String, Class<?>, Class<?>, Class<F>) - Method in class org.apache.spark.api.java.JavaPairRDD
Output the RDD to any Hadoop-supported file system.
saveAsHadoopFile(String, Class<?>, Class<?>, Class<F>, Class<? extends CompressionCodec>) - Method in class org.apache.spark.api.java.JavaPairRDD
Output the RDD to any Hadoop-supported file system, compressing with the supplied codec.
saveAsHadoopFile(String, ClassTag<F>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Output the RDD to any Hadoop-supported file system, using a Hadoop OutputFormat class supporting the key and value types K and V in this RDD.
saveAsHadoopFile(String, Class<? extends CompressionCodec>, ClassTag<F>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Output the RDD to any Hadoop-supported file system, using a Hadoop OutputFormat class supporting the key and value types K and V in this RDD.
saveAsHadoopFile(String, Class<?>, Class<?>, Class<? extends OutputFormat<?, ?>>, Class<? extends CompressionCodec>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Output the RDD to any Hadoop-supported file system, using a Hadoop OutputFormat class supporting the key and value types K and V in this RDD.
saveAsHadoopFile(String, Class<?>, Class<?>, Class<? extends OutputFormat<?, ?>>, JobConf, Option<Class<? extends CompressionCodec>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Output the RDD to any Hadoop-supported file system, using a Hadoop OutputFormat class supporting the key and value types K and V in this RDD.
saveAsHadoopFiles(String, String) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Save each RDD in this DStream as a Hadoop file.
saveAsHadoopFiles(String, String, Class<?>, Class<?>, Class<F>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Save each RDD in this DStream as a Hadoop file.
saveAsHadoopFiles(String, String, Class<?>, Class<?>, Class<F>, JobConf) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Save each RDD in this DStream as a Hadoop file.
saveAsHadoopFiles(String, String, ClassTag<F>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Save each RDD in this DStream as a Hadoop file.
saveAsHadoopFiles(String, String, Class<?>, Class<?>, Class<? extends OutputFormat<?, ?>>, JobConf) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Save each RDD in this DStream as a Hadoop file.
SaveAsHiveFile - Interface in org.apache.spark.sql.hive.execution

saveAsHiveFile(SparkSession, SparkPlan, Configuration, org.apache.spark.sql.hive.HiveShim.ShimFileSinkDesc, String, Map<Map<String, String>, String>, Seq<Attribute>) - Method in interface org.apache.spark.sql.hive.execution.SaveAsHiveFile

saveAsLibSVMFile(RDD<LabeledPoint>, String) - Static method in class org.apache.spark.mllib.util.MLUtils
Save labeled data in LIBSVM format.
saveAsNewAPIHadoopDataset(Configuration) - Method in class org.apache.spark.api.java.JavaPairRDD
Output the RDD to any Hadoop-supported storage system, using a Configuration object for that storage system.
saveAsNewAPIHadoopDataset(Configuration) - Method in class org.apache.spark.rdd.PairRDDFunctions
Output the RDD to any Hadoop-supported storage system with the new Hadoop API, using a Hadoop Configuration object for that storage system.
saveAsNewAPIHadoopFile(String, Class<?>, Class<?>, Class<F>, Configuration) - Method in class org.apache.spark.api.java.JavaPairRDD
Output the RDD to any Hadoop-supported file system.
saveAsNewAPIHadoopFile(String, Class<?>, Class<?>, Class<F>) - Method in class org.apache.spark.api.java.JavaPairRDD
Output the RDD to any Hadoop-supported file system.
saveAsNewAPIHadoopFile(String, ClassTag<F>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Output the RDD to any Hadoop-supported file system, using a new Hadoop API OutputFormat (mapreduce.OutputFormat) object supporting the key and value types K and V in this RDD.
saveAsNewAPIHadoopFile(String, Class<?>, Class<?>, Class<? extends OutputFormat<?, ?>>, Configuration) - Method in class org.apache.spark.rdd.PairRDDFunctions
Output the RDD to any Hadoop-supported file system, using a new Hadoop API OutputFormat (mapreduce.OutputFormat) object supporting the key and value types K and V in this RDD.
saveAsNewAPIHadoopFiles(String, String) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Save each RDD in this DStream as a Hadoop file.
saveAsNewAPIHadoopFiles(String, String, Class<?>, Class<?>, Class<F>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Save each RDD in this DStream as a Hadoop file.
saveAsNewAPIHadoopFiles(String, String, Class<?>, Class<?>, Class<F>, Configuration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Save each RDD in this DStream as a Hadoop file.
saveAsNewAPIHadoopFiles(String, String, ClassTag<F>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Save each RDD in this DStream as a Hadoop file.
saveAsNewAPIHadoopFiles(String, String, Class<?>, Class<?>, Class<? extends OutputFormat<?, ?>>, Configuration) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Save each RDD in this DStream as a Hadoop file.
saveAsObjectFile(String) - 接口 中的方法org.apache.spark.api.java.JavaRDDLike
Save this RDD as a SequenceFile of serialized objects.
saveAsObjectFile(String) - 类 中的方法org.apache.spark.rdd.RDD
Save this RDD as a SequenceFile of serialized objects.
saveAsObjectFiles(String, String) - 类 中的方法org.apache.spark.streaming.dstream.DStream
Save each RDD in this DStream as a SequenceFile of serialized objects.
saveAsSequenceFile(String, Option<Class<? extends CompressionCodec>>) - 类 中的方法org.apache.spark.rdd.SequenceFileRDDFunctions
Output the RDD as a Hadoop SequenceFile using the Writable types we infer from the RDD's key and value types.
saveAsTable(String) - 类 中的方法org.apache.spark.sql.DataFrameWriter
Saves the content of the DataFrame as the specified table.
saveAsTextFile(String) - 接口 中的方法org.apache.spark.api.java.JavaRDDLike
Save this RDD as a text file, using string representations of elements.
saveAsTextFile(String, Class<? extends CompressionCodec>) - 接口 中的方法org.apache.spark.api.java.JavaRDDLike
Save this RDD as a compressed text file, using string representations of elements.
saveAsTextFile(String) - 类 中的方法org.apache.spark.rdd.RDD
Save this RDD as a text file, using string representations of elements.
saveAsTextFile(String, Class<? extends CompressionCodec>) - 类 中的方法org.apache.spark.rdd.RDD
Save this RDD as a compressed text file, using string representations of elements.
saveAsTextFiles(String, String) - 类 中的方法org.apache.spark.streaming.dstream.DStream
Save each RDD in this DStream as a text file, using string representations of elements.
savedTasks() - 类 中的方法org.apache.spark.status.LiveStage
 
saveImpl(Params, PipelineStage[], SparkContext, String) - 类 中的方法org.apache.spark.ml.Pipeline.SharedReadWrite$
Save metadata and stages for a Pipeline or PipelineModel - save metadata to path/metadata - save stages to stages/IDX_UID
saveImpl(M, String, SparkSession, JsonAST.JObject) - 类 中的静态方法org.apache.spark.ml.tree.EnsembleModelReadWrite
Helper method for saving a tree ensemble to disk.
SaveInstanceEnd - org.apache.spark.ml中的类
Event fired after MLWriter.save.
SaveInstanceEnd(String) - 类 的构造器org.apache.spark.ml.SaveInstanceEnd
 
SaveInstanceStart - org.apache.spark.ml中的类
Event fired before MLWriter.save.
SaveInstanceStart(String) - 类 的构造器org.apache.spark.ml.SaveInstanceStart
 
SaveLoadV1_0$() - 类 的构造器org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$
 
SaveLoadV1_0$() - 类 的构造器org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$
 
SaveLoadV1_0$() - 类 的构造器org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0$
 
SaveLoadV1_0$() - 类 的构造器org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0$
 
SaveLoadV1_0$() - 类 的构造器org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0$
 
SaveLoadV1_0$() - 类 的构造器org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$
 
SaveLoadV1_0$() - 类 的构造器org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0$
 
SaveLoadV1_0$() - 类 的构造器org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0$
 
SaveLoadV1_0$() - 类 的构造器org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0$
 
SaveLoadV1_0$() - 类 的构造器org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$
 
SaveLoadV1_0$() - 类 的构造器org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
 
SaveLoadV2_0$() - 类 的构造器org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$
 
SaveLoadV2_0$() - 类 的构造器org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV2_0$
 
SaveLoadV2_0$() - 类 的构造器org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV2_0$
 
SaveLoadV3_0$() - 类 的构造器org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV3_0$
 
SaveMode - org.apache.spark.sql中的枚举
SaveMode is used to specify the expected behavior of saving a DataFrame to a data source.
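The behavior selected by each SaveMode can be summarized as a small decision function. The sketch below mirrors the documented enum constants (Append, Overwrite, ErrorIfExists, Ignore) but is an illustrative summary, not the Spark implementation:

```java
class SaveModeSketch {
    enum Mode { Append, Overwrite, ErrorIfExists, Ignore }

    // What a writer should do, given the mode and whether the target already exists.
    static String resolve(Mode mode, boolean targetExists) {
        if (!targetExists) return "write";
        switch (mode) {
            case Append:        return "append";  // add data to the existing contents
            case Overwrite:     return "replace"; // drop existing data, then write
            case Ignore:        return "no-op";   // leave existing data untouched
            case ErrorIfExists: return "throw";   // fail the write
            default:            throw new IllegalStateException();
        }
    }

    public static void main(String[] args) {
        System.out.println(resolve(Mode.Ignore, true)); // no-op
    }
}
```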
sc() - 类 中的方法org.apache.spark.api.java.JavaSparkContext
 
sc() - 接口 中的方法org.apache.spark.ml.util.BaseReadWrite
Returns the underlying `SparkContext`.
sc() - 类 中的方法org.apache.spark.sql.SQLImplicits.StringToColumn
 
scal(double, Vector) - 类 中的静态方法org.apache.spark.ml.linalg.BLAS
x = a * x
scal(double, Vector) - 类 中的静态方法org.apache.spark.mllib.linalg.BLAS
x = a * x
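The `scal` operation scales a vector in place, x = a * x. A minimal plain-Java sketch of the same semantics on a dense array (not the Spark BLAS implementation, which also handles sparse vectors):

```java
class ScalSketch {
    // Scale the vector x in place by the constant a, mirroring BLAS scal: x = a * x.
    static void scal(double a, double[] x) {
        for (int i = 0; i < x.length; i++) {
            x[i] *= a;
        }
    }

    public static void main(String[] args) {
        double[] x = {1.0, 2.0, 3.0};
        scal(2.0, x);
        System.out.println(java.util.Arrays.toString(x)); // [2.0, 4.0, 6.0]
    }
}
```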
scalaBoolean() - 类 中的静态方法org.apache.spark.sql.Encoders
An encoder for Scala's primitive boolean type.
scalaByte() - 类 中的静态方法org.apache.spark.sql.Encoders
An encoder for Scala's primitive byte type.
scalaDouble() - 类 中的静态方法org.apache.spark.sql.Encoders
An encoder for Scala's primitive double type.
scalaFloat() - 类 中的静态方法org.apache.spark.sql.Encoders
An encoder for Scala's primitive float type.
scalaInt() - 类 中的静态方法org.apache.spark.sql.Encoders
An encoder for Scala's primitive int type.
scalaIntToJavaLong(DStream<Object>) - 接口 中的方法org.apache.spark.streaming.api.java.JavaDStreamLike
 
scalaLong() - 类 中的静态方法org.apache.spark.sql.Encoders
An encoder for Scala's primitive long type.
scalaShort() - 类 中的静态方法org.apache.spark.sql.Encoders
An encoder for Scala's primitive short type.
scalaToJavaLong(JavaPairDStream<K, Object>, ClassTag<K>) - 类 中的静态方法org.apache.spark.streaming.api.java.JavaPairDStream
 
scalaVersion() - 类 中的方法org.apache.spark.status.api.v1.RuntimeInfo
 
scale() - 类 中的方法org.apache.spark.ml.regression.AFTSurvivalRegressionModel
 
scale() - 类 中的方法org.apache.spark.ml.regression.LinearRegressionModel
 
scale() - 类 中的方法org.apache.spark.mllib.random.GammaGenerator
 
scale() - 类 中的方法org.apache.spark.sql.types.Decimal
 
scale() - 类 中的方法org.apache.spark.sql.types.DecimalType
 
scalingVec() - 类 中的方法org.apache.spark.ml.feature.ElementwiseProduct
the vector to multiply with input vectors
scalingVec() - 类 中的方法org.apache.spark.mllib.feature.ElementwiseProduct
 
Scan - org.apache.spark.sql.connector.read中的接口
A logical representation of a data source scan.
ScanBuilder - org.apache.spark.sql.connector.read中的接口
An interface for building the Scan.
Schedulable - org.apache.spark.scheduler中的接口
An interface for schedulable entities.
SchedulableBuilder - org.apache.spark.scheduler中的接口
An interface to build a Schedulable tree. buildPools: build the tree nodes (pools). addTaskSetManager: build the leaf nodes (TaskSetManagers).
schedulableQueue() - 接口 中的方法org.apache.spark.scheduler.Schedulable
 
SCHEDULED() - 类 中的静态方法org.apache.spark.streaming.scheduler.ReceiverState
 
SCHEDULER_DELAY() - 类 中的静态方法org.apache.spark.status.TaskIndexNames
 
SCHEDULER_DELAY() - 类 中的静态方法org.apache.spark.ui.jobs.TaskDetailsClassNames
 
SCHEDULER_DELAY() - 类 中的静态方法org.apache.spark.ui.ToolTips
 
SchedulerBackend - org.apache.spark.scheduler中的接口
A backend interface for scheduling systems that allows plugging in different ones under TaskSchedulerImpl.
SchedulerBackendUtils - org.apache.spark.scheduler.cluster中的类
 
SchedulerBackendUtils() - 类 的构造器org.apache.spark.scheduler.cluster.SchedulerBackendUtils
 
schedulerDelay() - 类 中的方法org.apache.spark.status.api.v1.TaskData
 
schedulerDelay() - 类 中的方法org.apache.spark.status.api.v1.TaskMetricDistributions
 
schedulerDelay(TaskData) - 类 中的静态方法org.apache.spark.status.AppStatusUtils
 
schedulerDelay(long, long, long, long, long, long) - 类 中的静态方法org.apache.spark.status.AppStatusUtils
 
SchedulerPool - org.apache.spark.status中的类
 
SchedulerPool(String) - 类 的构造器org.apache.spark.status.SchedulerPool
 
SchedulingAlgorithm - org.apache.spark.scheduler中的接口
An interface for sort algorithms. FIFO: FIFO algorithm between TaskSetManagers. FS: FS algorithm between Pools, and FIFO or FS within Pools.
schedulingDelay() - 类 中的方法org.apache.spark.status.api.v1.streaming.BatchInfo
 
schedulingDelay() - 类 中的方法org.apache.spark.streaming.scheduler.BatchInfo
Time taken for the first job of this batch to start processing from the time this batch was submitted to the streaming scheduler.
schedulingMode() - 接口 中的方法org.apache.spark.scheduler.Schedulable
 
SchedulingMode - org.apache.spark.scheduler中的类
"FAIR" and "FIFO" determine which policy is used to order tasks amongst a Schedulable's sub-queues. "NONE" is used when a Schedulable has no sub-queues.
SchedulingMode() - 类 的构造器org.apache.spark.scheduler.SchedulingMode
 
schedulingMode() - 接口 中的方法org.apache.spark.scheduler.TaskScheduler
 
schedulingPool() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
schedulingPool() - 类 中的方法org.apache.spark.status.LiveStage
 
schema() - 接口 中的方法org.apache.spark.sql.connector.catalog.Table
Returns the schema of this table.
schema(StructType) - 类 中的方法org.apache.spark.sql.DataFrameReader
Specifies the input schema.
schema(String) - 类 中的方法org.apache.spark.sql.DataFrameReader
Specifies the schema by using the input DDL-formatted string.
schema() - 类 中的方法org.apache.spark.sql.Dataset
Returns the schema of this Dataset.
schema() - 接口 中的方法org.apache.spark.sql.Encoder
Returns the schema of encoding this type of object as a Row.
schema() - 接口 中的方法org.apache.spark.sql.Row
Schema for the row.
schema() - 类 中的方法org.apache.spark.sql.sources.BaseRelation
 
schema(StructType) - 类 中的方法org.apache.spark.sql.streaming.DataStreamReader
Specifies the input schema.
schema(String) - 类 中的方法org.apache.spark.sql.streaming.DataStreamReader
Specifies the schema by using the input DDL-formatted string.
schema_of_csv(String) - 类 中的静态方法org.apache.spark.sql.functions
Parses a CSV string and infers its schema in DDL format.
schema_of_csv(Column) - 类 中的静态方法org.apache.spark.sql.functions
Parses a CSV string and infers its schema in DDL format.
schema_of_csv(Column, Map<String, String>) - 类 中的静态方法org.apache.spark.sql.functions
Parses a CSV string and infers its schema in DDL format using options.
schema_of_json(String) - 类 中的静态方法org.apache.spark.sql.functions
Parses a JSON string and infers its schema in DDL format.
schema_of_json(Column) - 类 中的静态方法org.apache.spark.sql.functions
Parses a JSON string and infers its schema in DDL format.
schema_of_json(Column, Map<String, String>) - 类 中的静态方法org.apache.spark.sql.functions
Parses a JSON string and infers its schema in DDL format using options.
schemaLess() - 类 中的方法org.apache.spark.sql.hive.execution.HiveScriptIOSchema
 
SchemaRelationProvider - org.apache.spark.sql.sources中的接口
Implemented by objects that produce relations for a specific kind of data source with a given schema.
SchemaUtils - org.apache.spark.ml.util中的类
Utils for handling schemas.
SchemaUtils() - 类 的构造器org.apache.spark.ml.util.SchemaUtils
 
SchemaUtils - org.apache.spark.sql.util中的类
Utils for handling schemas.
SchemaUtils() - 类 的构造器org.apache.spark.sql.util.SchemaUtils
 
scope() - 类 中的方法org.apache.spark.storage.RDDInfo
 
scoreAndLabels() - 类 中的方法org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
 
scoreLabelsWeight() - 类 中的方法org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
 
scratch() - 类 中的方法org.apache.spark.mllib.optimization.NNLS.Workspace
 
script() - 类 中的方法org.apache.spark.sql.hive.execution.ScriptTransformationExec
 
Scripts() - 接口 中的方法org.apache.spark.sql.hive.HiveStrategies
 
Scripts() - 类 的构造器org.apache.spark.sql.hive.HiveStrategies.Scripts
 
Scripts$() - 类 的构造器org.apache.spark.sql.hive.HiveStrategies.Scripts$
 
ScriptTransformationExec - org.apache.spark.sql.hive.execution中的类
Transforms the input by forking and running the specified script.
ScriptTransformationExec(Seq<Expression>, String, Seq<Attribute>, SparkPlan, HiveScriptIOSchema) - 类 的构造器org.apache.spark.sql.hive.execution.ScriptTransformationExec
 
ScriptTransformationWriterThread - org.apache.spark.sql.hive.execution中的类
 
ScriptTransformationWriterThread(Iterator<InternalRow>, Seq<DataType>, org.apache.spark.sql.catalyst.expressions.Projection, AbstractSerDe, ObjectInspector, HiveScriptIOSchema, OutputStream, Process, org.apache.spark.util.CircularBuffer, TaskContext, Configuration) - 类 的构造器org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread
 
second(Column) - 类 中的静态方法org.apache.spark.sql.functions
Extracts the seconds as an integer from a given date/timestamp/string.
seconds() - 类 中的静态方法org.apache.spark.scheduler.StatsReportListener
 
seconds(long) - 类 中的静态方法org.apache.spark.streaming.Durations
 
Seconds - org.apache.spark.streaming中的类
Helper object that creates instances of Duration representing a given number of seconds.
Seconds() - 类 的构造器org.apache.spark.streaming.Seconds
 
securityManager() - 类 中的方法org.apache.spark.SparkEnv
 
securityManager() - 接口 中的方法org.apache.spark.status.api.v1.UIRoot
 
seed() - 类 中的方法org.apache.spark.ml.classification.DecisionTreeClassificationModel
 
seed() - 类 中的方法org.apache.spark.ml.classification.DecisionTreeClassifier
 
seed() - 类 中的方法org.apache.spark.ml.classification.GBTClassificationModel
 
seed() - 类 中的方法org.apache.spark.ml.classification.GBTClassifier
 
seed() - 类 中的方法org.apache.spark.ml.classification.MultilayerPerceptronClassifier
 
seed() - 类 中的方法org.apache.spark.ml.classification.RandomForestClassificationModel
 
seed() - 类 中的方法org.apache.spark.ml.classification.RandomForestClassifier
 
seed() - 类 中的方法org.apache.spark.ml.clustering.BisectingKMeans
 
seed() - 类 中的方法org.apache.spark.ml.clustering.BisectingKMeansModel
 
seed() - 类 中的方法org.apache.spark.ml.clustering.GaussianMixture
 
seed() - 类 中的方法org.apache.spark.ml.clustering.GaussianMixtureModel
 
seed() - 类 中的方法org.apache.spark.ml.clustering.KMeans
 
seed() - 类 中的方法org.apache.spark.ml.clustering.KMeansModel
 
seed() - 类 中的方法org.apache.spark.ml.clustering.LDA
 
seed() - 类 中的方法org.apache.spark.ml.clustering.LDAModel
 
seed() - 类 中的方法org.apache.spark.ml.feature.BucketedRandomProjectionLSH
 
seed() - 类 中的方法org.apache.spark.ml.feature.MinHashLSH
 
seed() - 类 中的方法org.apache.spark.ml.feature.Word2Vec
 
seed() - 类 中的方法org.apache.spark.ml.feature.Word2VecModel
 
seed() - 接口 中的方法org.apache.spark.ml.param.shared.HasSeed
Param for random seed.
seed() - 类 中的方法org.apache.spark.ml.recommendation.ALS
 
seed() - 类 中的方法org.apache.spark.ml.regression.DecisionTreeRegressionModel
 
seed() - 类 中的方法org.apache.spark.ml.regression.DecisionTreeRegressor
 
seed() - 类 中的方法org.apache.spark.ml.regression.GBTRegressionModel
 
seed() - 类 中的方法org.apache.spark.ml.regression.GBTRegressor
 
seed() - 类 中的方法org.apache.spark.ml.regression.RandomForestRegressionModel
 
seed() - 类 中的方法org.apache.spark.ml.regression.RandomForestRegressor
 
seed() - 类 中的方法org.apache.spark.ml.tuning.CrossValidator
 
seed() - 类 中的方法org.apache.spark.ml.tuning.CrossValidatorModel
 
seed() - 类 中的方法org.apache.spark.ml.tuning.TrainValidationSplit
 
seed() - 类 中的方法org.apache.spark.ml.tuning.TrainValidationSplitModel
 
seedParam() - 类 中的静态方法org.apache.spark.ml.image.SamplePathFilter
 
select(Column...) - 类 中的方法org.apache.spark.sql.Dataset
Selects a set of column-based expressions.
select(String, String...) - 类 中的方法org.apache.spark.sql.Dataset
Selects a set of columns.
select(Seq<Column>) - 类 中的方法org.apache.spark.sql.Dataset
Selects a set of column-based expressions.
select(String, Seq<String>) - 类 中的方法org.apache.spark.sql.Dataset
Selects a set of columns.
select(TypedColumn<T, U1>) - 类 中的方法org.apache.spark.sql.Dataset
Returns a new Dataset by computing the given Column expression for each element.
select(TypedColumn<T, U1>, TypedColumn<T, U2>) - 类 中的方法org.apache.spark.sql.Dataset
Returns a new Dataset by computing the given Column expressions for each element.
select(TypedColumn<T, U1>, TypedColumn<T, U2>, TypedColumn<T, U3>) - 类 中的方法org.apache.spark.sql.Dataset
Returns a new Dataset by computing the given Column expressions for each element.
select(TypedColumn<T, U1>, TypedColumn<T, U2>, TypedColumn<T, U3>, TypedColumn<T, U4>) - 类 中的方法org.apache.spark.sql.Dataset
Returns a new Dataset by computing the given Column expressions for each element.
select(TypedColumn<T, U1>, TypedColumn<T, U2>, TypedColumn<T, U3>, TypedColumn<T, U4>, TypedColumn<T, U5>) - 类 中的方法org.apache.spark.sql.Dataset
Returns a new Dataset by computing the given Column expressions for each element.
selectedFeatures() - 类 中的方法org.apache.spark.ml.feature.ChiSqSelectorModel
list of indices to select (filter).
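Applying such an index list to a feature vector just keeps the features at the selected positions. A hedged plain-Java sketch of that filtering step on a dense array (illustrative only, not the ChiSqSelectorModel transform itself):

```java
class IndexFilterSketch {
    // Keep only the features at the given (sorted) indices,
    // as a selector model with selectedFeatures would on a dense vector.
    static double[] selectFeatures(double[] features, int[] selected) {
        double[] out = new double[selected.length];
        for (int i = 0; i < selected.length; i++) {
            out[i] = features[selected[i]];
        }
        return out;
    }

    public static void main(String[] args) {
        double[] filtered = selectFeatures(new double[]{10, 20, 30, 40}, new int[]{0, 2});
        System.out.println(java.util.Arrays.toString(filtered)); // [10.0, 30.0]
    }
}
```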
selectedFeatures() - 类 中的方法org.apache.spark.mllib.feature.ChiSqSelectorModel
 
selectExpr(String...) - 类 中的方法org.apache.spark.sql.Dataset
Selects a set of SQL expressions.
selectExpr(Seq<String>) - 类 中的方法org.apache.spark.sql.Dataset
Selects a set of SQL expressions.
selectorType() - 类 中的方法org.apache.spark.ml.feature.ChiSqSelector
 
selectorType() - 类 中的方法org.apache.spark.ml.feature.ChiSqSelectorModel
 
selectorType() - 接口 中的方法org.apache.spark.ml.feature.ChiSqSelectorParams
The selector type of the ChiSqSelector.
selectorType() - 类 中的方法org.apache.spark.mllib.feature.ChiSqSelector
 
self() - 接口 中的方法org.apache.spark.rpc.RpcEndpoint
The RpcEndpointRef of this RpcEndpoint.
sendData(String, Seq<Object>) - 接口 中的方法org.apache.spark.streaming.kinesis.KinesisDataGenerator
Sends the data to Kinesis and returns the metadata for everything that has been sent.
sender() - 类 中的方法org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
 
senderAddress() - 接口 中的方法org.apache.spark.rpc.RpcCallContext
The sender of this message.
sendFailure(Throwable) - 接口 中的方法org.apache.spark.rpc.RpcCallContext
Report a failure to the sender.
sendToDst(A) - 类 中的方法org.apache.spark.graphx.EdgeContext
Sends a message to the destination vertex.
sendToDst(A) - 类 中的方法org.apache.spark.graphx.impl.AggregatingEdgeContext
 
sendToSrc(A) - 类 中的方法org.apache.spark.graphx.EdgeContext
Sends a message to the source vertex.
sendToSrc(A) - 类 中的方法org.apache.spark.graphx.impl.AggregatingEdgeContext
 
sendWith(TransportClient) - 接口 中的方法org.apache.spark.rpc.netty.OutboxMessage
 
seqToString(Seq<T>, Function1<T, String>) - 类 中的静态方法org.apache.spark.internal.config.ConfigHelpers
 
sequence() - 类 中的方法org.apache.spark.mllib.fpm.PrefixSpan.FreqSequence
 
sequence(Column, Column, Column) - 类 中的静态方法org.apache.spark.sql.functions
Generate a sequence of integers from start to stop, incrementing by step.
sequence(Column, Column) - 类 中的静态方法org.apache.spark.sql.functions
Generate a sequence of integers from start to stop, incrementing by 1 if start is less than or equal to stop, otherwise -1.
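The documented semantics of `sequence` are an inclusive range from start to stop, stepping by the given increment, with the two-argument form defaulting the step to 1 or -1. A plain-Java sketch of those semantics (not the Spark implementation, which operates on Columns):

```java
import java.util.ArrayList;
import java.util.List;

class SequenceSketch {
    // sequence(start, stop, step): inclusive range from start to stop, incrementing by step.
    static List<Long> sequence(long start, long stop, long step) {
        List<Long> out = new ArrayList<>();
        for (long v = start; step > 0 ? v <= stop : v >= stop; v += step) {
            out.add(v);
        }
        return out;
    }

    // Two-argument form: step defaults to 1 when start <= stop, otherwise -1.
    static List<Long> sequence(long start, long stop) {
        return sequence(start, stop, start <= stop ? 1 : -1);
    }

    public static void main(String[] args) {
        System.out.println(sequence(1, 5)); // [1, 2, 3, 4, 5]
        System.out.println(sequence(5, 1)); // [5, 4, 3, 2, 1]
    }
}
```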
sequenceCol() - 类 中的方法org.apache.spark.ml.fpm.PrefixSpan
Param for the name of the sequence column in the dataset (default: "sequence"); rows with nulls in this column are ignored.
sequenceFile(String, Class<K>, Class<V>, int) - 类 中的方法org.apache.spark.api.java.JavaSparkContext
Get an RDD for a Hadoop SequenceFile with given key and value types.
sequenceFile(String, Class<K>, Class<V>) - 类 中的方法org.apache.spark.api.java.JavaSparkContext
Get an RDD for a Hadoop SequenceFile.
sequenceFile(String, Class<K>, Class<V>, int) - 类 中的方法org.apache.spark.SparkContext
Get an RDD for a Hadoop SequenceFile with given key and value types.
sequenceFile(String, Class<K>, Class<V>) - 类 中的方法org.apache.spark.SparkContext
Get an RDD for a Hadoop SequenceFile with given key and value types.
sequenceFile(String, int, ClassTag<K>, ClassTag<V>, Function0<WritableConverter<K>>, Function0<WritableConverter<V>>) - 类 中的方法org.apache.spark.SparkContext
Version of sequenceFile() for types implicitly convertible to Writables through a WritableConverter.
SequenceFileRDDFunctions<K,V> - org.apache.spark.rdd中的类
Extra functions available on RDDs of (key, value) pairs to create a Hadoop SequenceFile, through an implicit conversion.
SequenceFileRDDFunctions(RDD<Tuple2<K, V>>, Class<? extends Writable>, Class<? extends Writable>, Function1<K, Writable>, ClassTag<K>, Function1<V, Writable>, ClassTag<V>) - 类 的构造器org.apache.spark.rdd.SequenceFileRDDFunctions
 
SER_TIME() - 类 中的静态方法org.apache.spark.status.TaskIndexNames
 
SerDe - org.apache.spark.api.r中的类
Utility functions to serialize and deserialize objects to/from R.
SerDe() - 类 的构造器org.apache.spark.api.r.SerDe
 
SERDE() - 类 中的静态方法org.apache.spark.sql.hive.execution.HiveOptions
 
serde() - 类 中的方法org.apache.spark.sql.hive.execution.HiveOptions
 
serdeProperties() - 类 中的方法org.apache.spark.sql.hive.execution.HiveOptions
 
SerializableConfiguration - org.apache.spark.util中的类
Hadoop configuration but serializable.
SerializableConfiguration(Configuration) - 类 的构造器org.apache.spark.util.SerializableConfiguration
 
SerializableMapWrapper(Map<A, B>) - 类 的构造器org.apache.spark.api.java.JavaUtils.SerializableMapWrapper
 
SerializableWritable<T extends org.apache.hadoop.io.Writable> - org.apache.spark中的类
 
SerializableWritable(T) - 类 的构造器org.apache.spark.SerializableWritable
 
SerializationDebugger - org.apache.spark.serializer中的类
 
SerializationDebugger() - 类 的构造器org.apache.spark.serializer.SerializationDebugger
 
SerializationDebugger.ObjectStreamClassMethods - org.apache.spark.serializer中的类
An implicit class that allows us to call private methods of ObjectStreamClass.
SerializationDebugger.ObjectStreamClassMethods$ - org.apache.spark.serializer中的类
 
SerializationFormats - org.apache.spark.api.r中的类
 
SerializationFormats() - 类 的构造器org.apache.spark.api.r.SerializationFormats
 
SerializationStream - org.apache.spark.serializer中的类
:: DeveloperApi :: A stream for writing serialized objects.
SerializationStream() - 类 的构造器org.apache.spark.serializer.SerializationStream
 
serializationStream() - 类 中的方法org.apache.spark.storage.memory.SerializedValuesHolder
 
serialize(Vector) - 类 中的方法org.apache.spark.mllib.linalg.VectorUDT
 
serialize(T, ClassTag<T>) - 类 中的方法org.apache.spark.serializer.DummySerializerInstance
 
serialize(T, ClassTag<T>) - 类 中的方法org.apache.spark.serializer.SerializerInstance
 
serialize(T) - 类 中的静态方法org.apache.spark.util.Utils
Serialize an object using Java serialization
SERIALIZED_R_DATA_SCHEMA() - 类 中的静态方法org.apache.spark.sql.api.r.SQLUtils
 
serializedData() - 类 中的方法org.apache.spark.scheduler.local.StatusUpdate
 
serializedMapStatus(org.apache.spark.broadcast.BroadcastManager, boolean, int, SparkConf) - 类 中的方法org.apache.spark.ShuffleStatus
Serializes the mapStatuses array into an efficient compressed format.
SerializedMemoryEntry<T> - org.apache.spark.storage.memory中的类
 
SerializedMemoryEntry(org.apache.spark.util.io.ChunkedByteBuffer, MemoryMode, ClassTag<T>) - 类 的构造器org.apache.spark.storage.memory.SerializedMemoryEntry
 
SerializedValuesHolder<T> - org.apache.spark.storage.memory中的类
A holder for storing the serialized values.
SerializedValuesHolder(BlockId, int, ClassTag<T>, MemoryMode, org.apache.spark.serializer.SerializerManager) - 类 的构造器org.apache.spark.storage.memory.SerializedValuesHolder
 
Serializer - org.apache.spark.serializer中的类
:: DeveloperApi :: A serializer.
Serializer() - 类 的构造器org.apache.spark.serializer.Serializer
 
serializer() - 类 中的方法org.apache.spark.ShuffleDependency
 
serializer() - 类 中的方法org.apache.spark.SparkEnv
 
SerializerInstance - org.apache.spark.serializer中的类
:: DeveloperApi :: An instance of a serializer, for use by one thread at a time.
SerializerInstance() - 类 的构造器org.apache.spark.serializer.SerializerInstance
 
serializerManager() - 类 中的方法org.apache.spark.SparkEnv
 
serializeStream(OutputStream) - 类 中的方法org.apache.spark.serializer.DummySerializerInstance
 
serializeStream(OutputStream) - 类 中的方法org.apache.spark.serializer.SerializerInstance
 
serializeViaNestedStream(OutputStream, SerializerInstance, Function1<SerializationStream, BoxedUnit>) - 类 中的静态方法org.apache.spark.util.Utils
Serialize via a nested stream using a specific serializer.
serviceName() - 接口 中的方法org.apache.spark.security.HadoopDelegationTokenProvider
Name of the service to provide delegation tokens.
servletContext() - 接口 中的方法org.apache.spark.status.api.v1.ApiRequestContext
 
ServletParams(Function1<HttpServletRequest, T>, String, Function1<T, String>) - 类 的构造器org.apache.spark.ui.JettyUtils.ServletParams
 
ServletParams$() - 类 的构造器org.apache.spark.ui.JettyUtils.ServletParams$
 
session(SparkSession) - 类 中的静态方法org.apache.spark.ml.r.RWrappers
 
session(SparkSession) - 接口 中的方法org.apache.spark.ml.util.BaseReadWrite
Sets the Spark Session to use for saving/loading.
session(SparkSession) - 类 中的方法org.apache.spark.ml.util.GeneralMLWriter
 
session(SparkSession) - 类 中的方法org.apache.spark.ml.util.MLReader
 
session(SparkSession) - 类 中的方法org.apache.spark.ml.util.MLWriter
 
sessionCatalog() - 类 中的方法org.apache.spark.sql.hive.RelationConversions
 
SessionConfigSupport - org.apache.spark.sql.connector.catalog中的接口
A mix-in interface for TableProvider.
sessionState() - 类 中的方法org.apache.spark.sql.SparkSession
 
set(long, long, int, int, VD, VD, ED) - 类 中的方法org.apache.spark.graphx.impl.AggregatingEdgeContext
 
Set() - 类 中的静态方法org.apache.spark.metrics.sink.StatsdMetricType
 
set(Param<T>, T) - 接口 中的方法org.apache.spark.ml.param.Params
Sets a parameter in the embedded param map.
set(String, Object) - 接口 中的方法org.apache.spark.ml.param.Params
Sets a parameter (by name) in the embedded param map.
set(ParamPair<?>) - 接口 中的方法org.apache.spark.ml.param.Params
Sets a parameter in the embedded param map.
set(String, long, long) - 类 中的静态方法org.apache.spark.rdd.InputFileBlockHolder
Sets the thread-local input block.
set(String, String) - 类 中的方法org.apache.spark.SparkConf
Set a configuration variable.
set(SparkEnv) - 类 中的静态方法org.apache.spark.SparkEnv
 
set(String, String) - 类 中的方法org.apache.spark.sql.RuntimeConfig
Sets the given Spark runtime configuration property.
set(String, boolean) - 类 中的方法org.apache.spark.sql.RuntimeConfig
Sets the given Spark runtime configuration property.
set(String, long) - 类 中的方法org.apache.spark.sql.RuntimeConfig
Sets the given Spark runtime configuration property.
set(long) - 类 中的方法org.apache.spark.sql.types.Decimal
Set this Decimal to the given Long.
set(int) - 类 中的方法org.apache.spark.sql.types.Decimal
Set this Decimal to the given Int.
set(long, int, int) - 类 中的方法org.apache.spark.sql.types.Decimal
Set this Decimal to the given unscaled Long, with a given precision and scale.
set(BigDecimal, int, int) - 类 中的方法org.apache.spark.sql.types.Decimal
Set this Decimal to the given BigDecimal value, with a given precision and scale.
set(BigDecimal) - 类 中的方法org.apache.spark.sql.types.Decimal
Set this Decimal to the given BigDecimal value, inheriting its precision and scale.
set(BigInteger) - 类 中的方法org.apache.spark.sql.types.Decimal
If the value is not in the range of a long, convert it to a BigDecimal; the precision and scale are based on the converted value.
set(Decimal) - 类 中的方法org.apache.spark.sql.types.Decimal
Set this Decimal to the given Decimal value.
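The unscaled-long form above encodes the value as unscaled × 10^-scale. The standard-library `java.math.BigDecimal` uses the same representation, which makes the encoding easy to demonstrate (this illustrates the representation, not Spark's Decimal internals):

```java
import java.math.BigDecimal;
import java.math.BigInteger;

class UnscaledDecimalDemo {
    public static void main(String[] args) {
        // unscaled = 123456, scale = 2 encodes 123456 * 10^-2 = 1234.56
        BigDecimal d = new BigDecimal(BigInteger.valueOf(123456L), 2);
        System.out.println(d);             // 1234.56
        System.out.println(d.precision()); // 6 (digits in the unscaled value)
        System.out.println(d.scale());     // 2
    }
}
```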
setActiveSession(SparkSession) - 类 中的静态方法org.apache.spark.sql.SparkSession
Changes the SparkSession that will be returned in this thread and its children when SparkSession.getOrCreate() is called.
setAggregationDepth(int) - 类 中的方法org.apache.spark.ml.classification.LinearSVC
Suggested depth for treeAggregate (greater than or equal to 2).
setAggregationDepth(int) - 类 中的方法org.apache.spark.ml.classification.LogisticRegression
Suggested depth for treeAggregate (greater than or equal to 2).
setAggregationDepth(int) - 类 中的方法org.apache.spark.ml.regression.AFTSurvivalRegression
Suggested depth for treeAggregate (greater than or equal to 2).
setAggregationDepth(int) - 类 中的方法org.apache.spark.ml.regression.LinearRegression
Suggested depth for treeAggregate (greater than or equal to 2).
setAggregator(Aggregator<K, V, C>) - 类 中的方法org.apache.spark.rdd.ShuffledRDD
Set aggregator for RDD's shuffle.
setAlgo(String) - 类 中的方法org.apache.spark.mllib.tree.configuration.Strategy
Sets Algorithm using a String.
setAlgo(Enumeration.Value) - 类 中的方法org.apache.spark.mllib.tree.configuration.Strategy
 
setAll(Iterable<Tuple2<String, String>>) - 类 中的方法org.apache.spark.SparkConf
Set multiple parameters together
setAll(Traversable<Tuple2<String, String>>) - 类 中的方法org.apache.spark.SparkConf
Deprecated.
Use setAll(Iterable) instead. Since 3.0.0.
setAlpha(double) - 类 中的方法org.apache.spark.ml.recommendation.ALS
 
setAlpha(Vector) - 类 中的方法org.apache.spark.mllib.clustering.LDA
Alias for setDocConcentration()
setAlpha(double) - 类 中的方法org.apache.spark.mllib.clustering.LDA
Alias for setDocConcentration()
setAlpha(double) - 类 中的方法org.apache.spark.mllib.recommendation.ALS
Sets the constant used in computing confidence in implicit ALS.
setAppName(String) - 类 中的方法org.apache.spark.launcher.AbstractLauncher
Set the application name.
setAppName(String) - 类 中的方法org.apache.spark.launcher.SparkLauncher
 
setAppName(String) - 类 中的方法org.apache.spark.SparkConf
Set a name for your application.
setAppResource(String) - 类 中的方法org.apache.spark.launcher.AbstractLauncher
Set the main application resource.
setAppResource(String) - 类 中的方法org.apache.spark.launcher.SparkLauncher
 
setBandwidth(double) - 类 中的方法org.apache.spark.mllib.stat.KernelDensity
Sets the bandwidth (standard deviation) of the Gaussian kernel (default: 1.0).
setBeta(double) - 类 中的方法org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
 
setBeta(double) - 类 中的方法org.apache.spark.mllib.clustering.LDA
Alias for setTopicConcentration()
setBinary(boolean) - 类 中的方法org.apache.spark.ml.feature.CountVectorizer
 
setBinary(boolean) - 类 中的方法org.apache.spark.ml.feature.CountVectorizerModel
 
setBinary(boolean) - 类 中的方法org.apache.spark.ml.feature.HashingTF
 
setBinary(boolean) - 类 中的方法org.apache.spark.mllib.feature.HashingTF
If true, term frequency vector will be binary such that non-zero term counts will be set to 1 (default: false)
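The binarization described above simply clamps every non-zero term count to 1. A minimal sketch of that step on a dense count array (illustrative; HashingTF itself works on sparse term-frequency vectors):

```java
class BinaryTfSketch {
    // With binary enabled, any non-zero term count becomes 1.0 and zeros stay 0.0.
    static double[] binarize(double[] termCounts) {
        double[] out = new double[termCounts.length];
        for (int i = 0; i < termCounts.length; i++) {
            out[i] = termCounts[i] != 0.0 ? 1.0 : 0.0;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] b = binarize(new double[]{0.0, 3.0, 1.0});
        System.out.println(java.util.Arrays.toString(b)); // [0.0, 1.0, 1.0]
    }
}
```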
setBlocks(int) - 类 中的方法org.apache.spark.mllib.recommendation.ALS
Set the number of blocks for both user blocks and product blocks to parallelize the computation into; pass -1 for an auto-configured number of blocks.
setBlockSize(int) - 类 中的方法org.apache.spark.ml.classification.MultilayerPerceptronClassifier
Sets the value of param blockSize.
setBucketLength(double) - 类 中的方法org.apache.spark.ml.feature.BucketedRandomProjectionLSH
 
setCacheNodeIds(boolean) - 类 中的方法org.apache.spark.ml.classification.DecisionTreeClassifier
 
setCacheNodeIds(boolean) - 类 中的方法org.apache.spark.ml.classification.GBTClassifier
 
setCacheNodeIds(boolean) - 类 中的方法org.apache.spark.ml.classification.RandomForestClassifier
 
setCacheNodeIds(boolean) - 类 中的方法org.apache.spark.ml.regression.DecisionTreeRegressor
 
setCacheNodeIds(boolean) - 类 中的方法org.apache.spark.ml.regression.GBTRegressor
 
setCacheNodeIds(boolean) - 类 中的方法org.apache.spark.ml.regression.RandomForestRegressor
 
setCallSite(String) - Method in class org.apache.spark.api.java.JavaSparkContext
Pass-through to SparkContext.setCallSite.
setCallSite(String) - Method in class org.apache.spark.SparkContext
Set the thread-local property for overriding the call sites of actions and RDDs.
setCaseSensitive(boolean) - Method in class org.apache.spark.ml.feature.StopWordsRemover
 
setCategoricalCols(String[]) - Method in class org.apache.spark.ml.feature.FeatureHasher
 
setCategoricalFeaturesInfo(Map<Integer, Integer>) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
Sets categoricalFeaturesInfo using a Java Map.
setCategoricalFeaturesInfo(Map<Object, Object>) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
setCensorCol(String) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
 
setCheckpointDir(String) - Method in class org.apache.spark.api.java.JavaSparkContext
Set the directory under which RDDs are going to be checkpointed.
setCheckpointDir(String) - Method in class org.apache.spark.SparkContext
Set the directory under which RDDs are going to be checkpointed.
setCheckpointInterval(int) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
Specifies how often to checkpoint the cached node IDs.
setCheckpointInterval(int) - Method in class org.apache.spark.ml.classification.GBTClassifier
Specifies how often to checkpoint the cached node IDs.
setCheckpointInterval(int) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
Specifies how often to checkpoint the cached node IDs.
setCheckpointInterval(int) - Method in class org.apache.spark.ml.clustering.LDA
 
setCheckpointInterval(int) - Method in class org.apache.spark.ml.recommendation.ALS
 
setCheckpointInterval(int) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
Specifies how often to checkpoint the cached node IDs.
setCheckpointInterval(int) - Method in class org.apache.spark.ml.regression.GBTRegressor
Specifies how often to checkpoint the cached node IDs.
setCheckpointInterval(int) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
Specifies how often to checkpoint the cached node IDs.
setCheckpointInterval(int) - Method in class org.apache.spark.mllib.clustering.LDA
Sets the checkpoint interval (greater than or equal to 1), or disables checkpointing (-1).
setCheckpointInterval(int) - Method in class org.apache.spark.mllib.recommendation.ALS
:: DeveloperApi :: Set period (in iterations) between checkpoints (default = 10).
setCheckpointInterval(int) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
setClassifier(Classifier<?, ?, ?>) - Method in class org.apache.spark.ml.classification.OneVsRest
 
setColdStartStrategy(String) - Method in class org.apache.spark.ml.recommendation.ALS
 
setColdStartStrategy(String) - Method in class org.apache.spark.ml.recommendation.ALSModel
 
setCollectSubModels(boolean) - Method in class org.apache.spark.ml.tuning.CrossValidator
Whether to collect submodels when fitting.
setCollectSubModels(boolean) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
Whether to collect submodels when fitting.
setConf(Configuration) - Method in interface org.apache.spark.input.Configurable
 
setConf(String, String) - Method in class org.apache.spark.launcher.AbstractLauncher
Set a single configuration value for the application.
setConf(String, String) - Method in class org.apache.spark.launcher.SparkLauncher
 
setConf(Configuration) - Method in class org.apache.spark.ml.image.SamplePathFilter
 
setConf(Properties) - Method in class org.apache.spark.sql.SQLContext
Set Spark SQL configuration properties.
setConf(String, String) - Method in class org.apache.spark.sql.SQLContext
Set the given Spark SQL configuration property.
setConfig(String, String) - Static method in class org.apache.spark.launcher.SparkLauncher
Set a configuration value for the launcher library.
setConvergenceTol(double) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
Set the largest change in log-likelihood at which convergence is considered to have occurred.
setConvergenceTol(double) - Method in class org.apache.spark.mllib.optimization.GradientDescent
Set the convergence tolerance.
setConvergenceTol(double) - Method in class org.apache.spark.mllib.optimization.LBFGS
Set the convergence tolerance of iterations for L-BFGS.
setConvergenceTol(double) - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
Set the convergence tolerance.
setCurrentDatabase(String) - Method in class org.apache.spark.sql.catalog.Catalog
Sets the current default database in this session.
setCurrentDatabase(String) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Sets the name of the current database.
setCustomHostname(String) - Static method in class org.apache.spark.util.Utils
Allow setting a custom host name because when we run on Mesos we need to use the same hostname it reports to the master.
setDAGScheduler(DAGScheduler) - Method in interface org.apache.spark.scheduler.TaskScheduler
 
setDecayFactor(double) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
Set the forgetfulness of the previous centroids.
setDefault(Param<T>, T) - Method in interface org.apache.spark.ml.param.Params
Sets a default value for a param.
setDefault(Seq<ParamPair<?>>) - Method in interface org.apache.spark.ml.param.Params
Sets default values for a list of params.
setDefaultClassLoader(ClassLoader) - Method in class org.apache.spark.serializer.KryoSerializer
 
setDefaultClassLoader(ClassLoader) - Method in class org.apache.spark.serializer.Serializer
Sets a class loader for the serializer to use in deserialization.
setDefaultSession(SparkSession) - Static method in class org.apache.spark.sql.SparkSession
Sets the default SparkSession that is returned by the builder.
setDegree(int) - Method in class org.apache.spark.ml.feature.PolynomialExpansion
 
setDelegateCatalog(CatalogPlugin) - Method in interface org.apache.spark.sql.connector.catalog.CatalogExtension
This will be called only once by Spark to pass in the Spark built-in session catalog, after CatalogPlugin.initialize(String, CaseInsensitiveStringMap) is called.
setDelegateCatalog(CatalogPlugin) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
 
setDeployMode(String) - Method in class org.apache.spark.launcher.AbstractLauncher
Set the deploy mode for the application.
setDeployMode(String) - Method in class org.apache.spark.launcher.SparkLauncher
 
setDistanceMeasure(String) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
 
setDistanceMeasure(String) - Method in class org.apache.spark.ml.clustering.KMeans
 
setDistanceMeasure(String) - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
 
setDistanceMeasure(String) - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
Set the distance suite used by the algorithm.
setDistanceMeasure(String) - Method in class org.apache.spark.mllib.clustering.KMeans
Set the distance suite used by the algorithm.
setDocConcentration(double[]) - Method in class org.apache.spark.ml.clustering.LDA
 
setDocConcentration(double) - Method in class org.apache.spark.ml.clustering.LDA
 
setDocConcentration(Vector) - Method in class org.apache.spark.mllib.clustering.LDA
Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").
setDocConcentration(double) - Method in class org.apache.spark.mllib.clustering.LDA
Replicates a Double docConcentration to create a symmetric prior.
setDropLast(boolean) - Method in class org.apache.spark.ml.feature.OneHotEncoder
 
setDropLast(boolean) - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
 
setDstCol(String) - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
 
setElasticNetParam(double) - Method in class org.apache.spark.ml.classification.LogisticRegression
Set the ElasticNet mixing parameter.
setElasticNetParam(double) - Method in class org.apache.spark.ml.regression.LinearRegression
Set the ElasticNet mixing parameter.
setEps(double) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
 
setEpsilon(double) - Method in class org.apache.spark.ml.regression.LinearRegression
Sets the value of param epsilon.
setEpsilon(double) - Method in class org.apache.spark.mllib.clustering.KMeans
Set the distance threshold within which we consider centers to have converged.
setError(PrintStream) - Method in interface org.apache.spark.sql.hive.client.HiveClient
 
setEstimator(Estimator<?>) - Method in class org.apache.spark.ml.tuning.CrossValidator
 
setEstimator(Estimator<?>) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
 
setEstimatorParamMaps(ParamMap[]) - Method in class org.apache.spark.ml.tuning.CrossValidator
 
setEstimatorParamMaps(ParamMap[]) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
 
setEvaluator(Evaluator) - Method in class org.apache.spark.ml.tuning.CrossValidator
 
setEvaluator(Evaluator) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
 
setExecutorEnv(String, String) - Method in class org.apache.spark.SparkConf
Set an environment variable to be used when launching executors for this application.
setExecutorEnv(Seq<Tuple2<String, String>>) - Method in class org.apache.spark.SparkConf
Set multiple environment variables to be used when launching executors.
setExecutorEnv(Tuple2<String, String>[]) - Method in class org.apache.spark.SparkConf
Set multiple environment variables to be used when launching executors.
setFamily(String) - Method in class org.apache.spark.ml.classification.LogisticRegression
Sets the value of param family.
setFamily(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
Sets the value of param family.
setFdr(double) - Method in class org.apache.spark.ml.feature.ChiSqSelector
 
setFdr(double) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
 
setFeatureIndex(int) - Method in class org.apache.spark.ml.regression.IsotonicRegression
 
setFeatureIndex(int) - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
 
setFeaturesCol(String) - Method in class org.apache.spark.ml.classification.OneVsRest
 
setFeaturesCol(String) - Method in class org.apache.spark.ml.classification.OneVsRestModel
 
setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
 
setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
 
setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.GaussianMixture
 
setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
 
setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.KMeans
 
setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.KMeansModel
 
setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.LDA
The features for LDA should be a Vector representing the word counts in a document.
setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.LDAModel
The features for LDA should be a Vector representing the word counts in a document.
setFeaturesCol(String) - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
 
setFeaturesCol(String) - Method in class org.apache.spark.ml.feature.ChiSqSelector
 
setFeaturesCol(String) - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
 
setFeaturesCol(String) - Method in class org.apache.spark.ml.feature.RFormula
 
setFeaturesCol(String) - Method in class org.apache.spark.ml.PredictionModel
 
setFeaturesCol(String) - Method in class org.apache.spark.ml.Predictor
 
setFeaturesCol(String) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
 
setFeaturesCol(String) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
 
setFeaturesCol(String) - Method in class org.apache.spark.ml.regression.IsotonicRegression
 
setFeaturesCol(String) - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
 
setFeatureSubsetStrategy(String) - Method in class org.apache.spark.ml.classification.GBTClassifier
 
setFeatureSubsetStrategy(String) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
 
setFeatureSubsetStrategy(String) - Method in class org.apache.spark.ml.regression.GBTRegressor
 
setFeatureSubsetStrategy(String) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
 
setFinalRDDStorageLevel(StorageLevel) - Method in class org.apache.spark.mllib.recommendation.ALS
:: DeveloperApi :: Sets storage level for final RDDs (user/product used in MatrixFactorizationModel).
setFinalStorageLevel(String) - Method in class org.apache.spark.ml.recommendation.ALS
 
setFitIntercept(boolean) - Method in class org.apache.spark.ml.classification.LinearSVC
Whether to fit an intercept term.
setFitIntercept(boolean) - Method in class org.apache.spark.ml.classification.LogisticRegression
Whether to fit an intercept term.
setFitIntercept(boolean) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
Set if we should fit the intercept. Default is true.
setFitIntercept(boolean) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
Sets if we should fit the intercept.
setFitIntercept(boolean) - Method in class org.apache.spark.ml.regression.LinearRegression
Set if we should fit the intercept.
setForceIndexLabel(boolean) - Method in class org.apache.spark.ml.feature.RFormula
 
setFormula(String) - Method in class org.apache.spark.ml.feature.RFormula
Sets the formula to use for this transformer.
setFpr(double) - Method in class org.apache.spark.ml.feature.ChiSqSelector
 
setFpr(double) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
 
setFwe(double) - Method in class org.apache.spark.ml.feature.ChiSqSelector
 
setFwe(double) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
 
setGaps(boolean) - Method in class org.apache.spark.ml.feature.RegexTokenizer
 
setGradient(Gradient) - Method in class org.apache.spark.mllib.optimization.GradientDescent
Set the gradient function (of the loss function of one single data example) to be used for SGD.
setGradient(Gradient) - Method in class org.apache.spark.mllib.optimization.LBFGS
Set the gradient function (of the loss function of one single data example) to be used for L-BFGS.
setHalfLife(double, String) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
Set the half life and time unit ("batches" or "points").
setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.Bucketizer
 
setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.OneHotEncoder
 
setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
 
setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
 
setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.RFormula
 
setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.StringIndexer
 
setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.StringIndexerModel
 
setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.VectorAssembler
 
setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.VectorIndexer
 
setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.VectorSizeHint
 
setHashAlgorithm(String) - Method in class org.apache.spark.mllib.feature.HashingTF
Set the hash algorithm used when mapping terms to integers.
setIfMissing(String, String) - Method in class org.apache.spark.SparkConf
Set a parameter if it isn't already configured.
setImplicitPrefs(boolean) - Method in class org.apache.spark.ml.recommendation.ALS
 
setImplicitPrefs(boolean) - Method in class org.apache.spark.mllib.recommendation.ALS
Sets whether to use implicit preference.
setImpurity(String) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
 
setImpurity(String) - Method in class org.apache.spark.ml.classification.GBTClassifier
The impurity setting is ignored for GBT models.
setImpurity(String) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
 
setImpurity(String) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
 
setImpurity(String) - Method in class org.apache.spark.ml.regression.GBTRegressor
The impurity setting is ignored for GBT models.
setImpurity(String) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
 
setImpurity(Impurity) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
setIndices(int[]) - Method in class org.apache.spark.ml.feature.VectorSlicer
 
setInfo(PrintStream) - Method in interface org.apache.spark.sql.hive.client.HiveClient
 
setInitialCenters(Vector[], double[]) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
Specify initial centers directly.
setInitializationMode(String) - Method in class org.apache.spark.mllib.clustering.KMeans
Set the initialization algorithm.
setInitializationMode(String) - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering
Set the initialization mode.
setInitializationSteps(int) - Method in class org.apache.spark.mllib.clustering.KMeans
Set the number of steps for the k-means|| initialization mode.
setInitialModel(GaussianMixtureModel) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
Set the initial GMM starting point, bypassing the random initialization.
setInitialModel(KMeansModel) - Method in class org.apache.spark.mllib.clustering.KMeans
Set the initial starting point, bypassing the random initialization or k-means||. The condition model.k == this.k must be met; failure results in an IllegalArgumentException.
setInitialWeights(Vector) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
Sets the value of param initialWeights.
setInitialWeights(Vector) - Method in class org.apache.spark.mllib.classification.StreamingLogisticRegressionWithSGD
Set the initial weights.
setInitialWeights(Vector) - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
Set the initial weights.
setInitMode(String) - Method in class org.apache.spark.ml.clustering.KMeans
 
setInitMode(String) - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
 
setInitSteps(int) - Method in class org.apache.spark.ml.clustering.KMeans
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.Binarizer
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.Bucketizer
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.CountVectorizer
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.CountVectorizerModel
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.HashingTF
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.IDF
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.IDFModel
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.Imputer
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.ImputerModel
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.IndexToString
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.MaxAbsScaler
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.MinHashLSH
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.MinHashLSHModel
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.MinMaxScaler
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.OneHotEncoder
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.PCA
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.PCAModel
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.RobustScaler
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.RobustScalerModel
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.StandardScaler
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.StandardScalerModel
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.StopWordsRemover
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.StringIndexer
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.StringIndexerModel
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.VectorIndexer
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.VectorIndexerModel
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.VectorSizeHint
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.VectorSlicer
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.Word2Vec
 
setInputCol(String) - Method in class org.apache.spark.ml.feature.Word2VecModel
 
setInputCol(String) - Method in class org.apache.spark.ml.UnaryTransformer
 
setInputCols(String[]) - Method in class org.apache.spark.ml.feature.Binarizer
 
setInputCols(String[]) - Method in class org.apache.spark.ml.feature.Bucketizer
 
setInputCols(Seq<String>) - Method in class org.apache.spark.ml.feature.FeatureHasher
 
setInputCols(String[]) - Method in class org.apache.spark.ml.feature.FeatureHasher
 
setInputCols(String[]) - Method in class org.apache.spark.ml.feature.Imputer
 
setInputCols(String[]) - Method in class org.apache.spark.ml.feature.ImputerModel
 
setInputCols(String[]) - Method in class org.apache.spark.ml.feature.Interaction
 
setInputCols(String[]) - Method in class org.apache.spark.ml.feature.OneHotEncoder
 
setInputCols(String[]) - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
 
setInputCols(String[]) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
 
setInputCols(String[]) - Method in class org.apache.spark.ml.feature.StringIndexer
 
setInputCols(String[]) - Method in class org.apache.spark.ml.feature.StringIndexerModel
 
setInputCols(String[]) - Method in class org.apache.spark.ml.feature.VectorAssembler
 
setIntercept(boolean) - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
Set if the algorithm should add an intercept.
setIntermediateRDDStorageLevel(StorageLevel) - Method in class org.apache.spark.mllib.recommendation.ALS
:: DeveloperApi :: Sets storage level for intermediate RDDs (user/product in/out links).
setIntermediateStorageLevel(String) - Method in class org.apache.spark.ml.recommendation.ALS
 
setInverse(boolean) - Method in class org.apache.spark.ml.feature.DCT
 
setIsotonic(boolean) - Method in class org.apache.spark.ml.regression.IsotonicRegression
 
setIsotonic(boolean) - Method in class org.apache.spark.mllib.regression.IsotonicRegression
Sets the isotonic parameter.
setItemCol(String) - Method in class org.apache.spark.ml.recommendation.ALS
 
setItemCol(String) - Method in class org.apache.spark.ml.recommendation.ALSModel
 
setItemsCol(String) - Method in class org.apache.spark.ml.fpm.FPGrowth
 
setItemsCol(String) - Method in class org.apache.spark.ml.fpm.FPGrowthModel
 
setIterations(int) - Method in class org.apache.spark.mllib.recommendation.ALS
Set the number of iterations to run.
setJars(Seq<String>) - Method in class org.apache.spark.SparkConf
Set JAR files to distribute to the cluster.
setJars(String[]) - Method in class org.apache.spark.SparkConf
Set JAR files to distribute to the cluster.
setJavaHome(String) - Method in class org.apache.spark.launcher.SparkLauncher
Set a custom JAVA_HOME for launching the Spark application.
setJobDescription(String) - Method in class org.apache.spark.api.java.JavaSparkContext
Set a human readable description of the current job.
setJobDescription(String) - Method in class org.apache.spark.SparkContext
Set a human readable description of the current job.
setJobGroup(String, String, boolean) - Method in class org.apache.spark.api.java.JavaSparkContext
Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.
setJobGroup(String, String) - Method in class org.apache.spark.api.java.JavaSparkContext
Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.
setJobGroup(String, String, boolean) - Method in class org.apache.spark.SparkContext
Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.
setK(int) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
 
setK(int) - Method in class org.apache.spark.ml.clustering.GaussianMixture
 
setK(int) - Method in class org.apache.spark.ml.clustering.KMeans
 
setK(int) - Method in class org.apache.spark.ml.clustering.LDA
 
setK(int) - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
 
setK(int) - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
 
setK(int) - Method in class org.apache.spark.ml.feature.PCA
 
setK(int) - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
Sets the desired number of leaf clusters (default: 4).
setK(int) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
Set the number of Gaussians in the mixture model.
setK(int) - Method in class org.apache.spark.mllib.clustering.KMeans
Set the number of clusters to create (k).
setK(int) - Method in class org.apache.spark.mllib.clustering.LDA
Set the number of topics to infer, i.e., the number of soft cluster centers.
setK(int) - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering
Set the number of clusters.
setK(int) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
Set the number of clusters.
setKappa(double) - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
Learning rate: an exponential decay rate that should be in (0.5, 1.0] to guarantee asymptotic convergence.
setKeepLastCheckpoint(boolean) - Method in class org.apache.spark.ml.clustering.LDA
 
setKeepLastCheckpoint(boolean) - Method in class org.apache.spark.mllib.clustering.EMLDAOptimizer
If using checkpointing, this indicates whether to keep the last checkpoint (vs clean up).
setKeyOrdering(Ordering<K>) - Method in class org.apache.spark.rdd.ShuffledRDD
Set key ordering for RDD's shuffle.
setLabelCol(String) - Method in class org.apache.spark.ml.classification.OneVsRest
 
setLabelCol(String) - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
 
setLabelCol(String) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
 
setLabelCol(String) - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
 
setLabelCol(String) - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
 
setLabelCol(String) - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
 
setLabelCol(String) - Method in class org.apache.spark.ml.feature.ChiSqSelector
 
setLabelCol(String) - Method in class org.apache.spark.ml.feature.RFormula
 
setLabelCol(String) - Method in class org.apache.spark.ml.Predictor
 
setLabelCol(String) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
 
setLabelCol(String) - Method in class org.apache.spark.ml.regression.IsotonicRegression
 
setLabels(String[]) - Method in class org.apache.spark.ml.feature.IndexToString
 
setLambda(double) - Method in class org.apache.spark.mllib.classification.NaiveBayes
Set the smoothing parameter.
setLambda(double) - Method in class org.apache.spark.mllib.recommendation.ALS
Set the regularization parameter, lambda.
setLayers(int[]) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
Sets the value of param layers.
setLeafCol(String) - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
 
setLearningDecay(double) - Method in class org.apache.spark.ml.clustering.LDA
 
setLearningOffset(double) - Method in class org.apache.spark.ml.clustering.LDA
 
setLearningRate(double) - Method in class org.apache.spark.mllib.feature.Word2Vec
Sets the initial learning rate (default: 0.025).
setLearningRate(double) - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
 
setLink(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
Sets the value of param link.
setLinkPower(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
Sets the value of param linkPower.
setLinkPredictionCol(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
Sets the link prediction (linear predictor) column name.
setLinkPredictionCol(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
Sets the link prediction (linear predictor) column name.
setLocale(String) - Method in class org.apache.spark.ml.feature.StopWordsRemover
 
setLocalProperty(String, String) - Method in class org.apache.spark.api.java.JavaSparkContext
Set a local property that affects jobs submitted from this thread, and all child threads, such as the Spark fair scheduler pool.
setLocalProperty(String, String) - Method in class org.apache.spark.SparkContext
Set a local property that affects jobs submitted from this thread, such as the Spark fair scheduler pool.
setLogLevel(String) - Method in class org.apache.spark.api.java.JavaSparkContext
Control our logLevel.
setLogLevel(String) - Method in class org.apache.spark.SparkContext
Control our logLevel.
setLogLevel(Level) - Static method in class org.apache.spark.util.Utils
Configure a new log4j level.
setLoss(String) - Method in class org.apache.spark.ml.regression.LinearRegression
Sets the value of param loss.
setLoss(Loss) - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
 
setLossType(String) - Method in class org.apache.spark.ml.classification.GBTClassifier
 
setLossType(String) - Method in class org.apache.spark.ml.regression.GBTRegressor
 
setLower(double) - Method in class org.apache.spark.ml.feature.RobustScaler
 
setLowerBoundsOnCoefficients(Matrix) - Method in class org.apache.spark.ml.classification.LogisticRegression
Set the lower bounds on coefficients if fitting under bound constrained optimization.
setLowerBoundsOnIntercepts(Vector) - Method in class org.apache.spark.ml.classification.LogisticRegression
Set the lower bounds on intercepts if fitting under bound constrained optimization.
setMainClass(String) - Method in class org.apache.spark.launcher.AbstractLauncher
Sets the application class name for Java/Scala applications.
setMainClass(String) - Method in class org.apache.spark.launcher.SparkLauncher
 
setMapSideCombine(boolean) - Method in class org.apache.spark.rdd.ShuffledRDD
Set mapSideCombine flag for RDD's shuffle.
setMaster(String) - Method in class org.apache.spark.launcher.AbstractLauncher
Set the Spark master for the application.
setMaster(String) - Method in class org.apache.spark.launcher.SparkLauncher
 
setMaster(String) - Method in class org.apache.spark.SparkConf
The master URL to connect to, such as "local" to run locally with one thread, "local[4]" to run locally with 4 cores, or "spark://master:7077" to run on a Spark standalone cluster.
setMax(double) - Method in class org.apache.spark.ml.feature.MinMaxScaler
 
setMax(double) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
 
setMaxBins(int) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
 
setMaxBins(int) - Method in class org.apache.spark.ml.classification.GBTClassifier
 
setMaxBins(int) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
 
setMaxBins(int) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
 
setMaxBins(int) - Method in class org.apache.spark.ml.regression.GBTRegressor
 
setMaxBins(int) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
 
setMaxBins(int) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
setMaxCategories(int) - Method in class org.apache.spark.ml.feature.VectorIndexer
 
setMaxDepth(int) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
 
setMaxDepth(int) - Method in class org.apache.spark.ml.classification.GBTClassifier
 
setMaxDepth(int) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
 
setMaxDepth(int) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
 
setMaxDepth(int) - Method in class org.apache.spark.ml.regression.GBTRegressor
 
setMaxDepth(int) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
 
setMaxDepth(int) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
setMaxDF(double) - Method in class org.apache.spark.ml.feature.CountVectorizer
 
setMaxIter(int) - Method in class org.apache.spark.ml.classification.GBTClassifier
 
setMaxIter(int) - Method in class org.apache.spark.ml.classification.LinearSVC
Set the maximum number of iterations.
setMaxIter(int) - Method in class org.apache.spark.ml.classification.LogisticRegression
Set the maximum number of iterations.
setMaxIter(int) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
Set the maximum number of iterations.
setMaxIter(int) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
 
setMaxIter(int) - Method in class org.apache.spark.ml.clustering.GaussianMixture
 
setMaxIter(int) - Method in class org.apache.spark.ml.clustering.KMeans
 
setMaxIter(int) - Method in class org.apache.spark.ml.clustering.LDA
 
setMaxIter(int) - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
 
setMaxIter(int) - Method in class org.apache.spark.ml.feature.Word2Vec
 
setMaxIter(int) - Method in class org.apache.spark.ml.recommendation.ALS
 
setMaxIter(int) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
Set the maximum number of iterations.
setMaxIter(int) - Method in class org.apache.spark.ml.regression.GBTRegressor
 
setMaxIter(int) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
Sets the maximum number of iterations (applicable for solver "irls").
setMaxIter(int) - Method in class org.apache.spark.ml.regression.LinearRegression
Set the maximum number of iterations.
setMaxIterations(int) - 类 中的方法org.apache.spark.mllib.clustering.BisectingKMeans
Sets the max number of k-means iterations to split clusters (default: 20).
setMaxIterations(int) - 类 中的方法org.apache.spark.mllib.clustering.GaussianMixture
Set the maximum number of iterations allowed.
setMaxIterations(int) - 类 中的方法org.apache.spark.mllib.clustering.KMeans
Set maximum number of iterations allowed.
setMaxIterations(int) - 类 中的方法org.apache.spark.mllib.clustering.LDA
Set the maximum number of iterations allowed.
setMaxIterations(int) - 类 中的方法org.apache.spark.mllib.clustering.PowerIterationClustering
Set maximum number of iterations of the power iteration loop
setMaxLocalProjDBSize(long) - 类 中的方法org.apache.spark.ml.fpm.PrefixSpan
 
setMaxLocalProjDBSize(long) - 类 中的方法org.apache.spark.mllib.fpm.PrefixSpan
Sets the maximum number of items (including delimiters used in the internal storage format) allowed in a projected database before local processing (default: 32000000L).
setMaxMemoryInMB(int) - 类 中的方法org.apache.spark.ml.classification.DecisionTreeClassifier
 
setMaxMemoryInMB(int) - 类 中的方法org.apache.spark.ml.classification.GBTClassifier
 
setMaxMemoryInMB(int) - 类 中的方法org.apache.spark.ml.classification.RandomForestClassifier
 
setMaxMemoryInMB(int) - 类 中的方法org.apache.spark.ml.regression.DecisionTreeRegressor
 
setMaxMemoryInMB(int) - 类 中的方法org.apache.spark.ml.regression.GBTRegressor
 
setMaxMemoryInMB(int) - 类 中的方法org.apache.spark.ml.regression.RandomForestRegressor
 
setMaxMemoryInMB(int) - 类 中的方法org.apache.spark.mllib.tree.configuration.Strategy
 
setMaxPatternLength(int) - 类 中的方法org.apache.spark.ml.fpm.PrefixSpan
 
setMaxPatternLength(int) - 类 中的方法org.apache.spark.mllib.fpm.PrefixSpan
Sets maximal pattern length (default: 10).
setMaxSentenceLength(int) - 类 中的方法org.apache.spark.ml.feature.Word2Vec
 
setMaxSentenceLength(int) - 类 中的方法org.apache.spark.mllib.feature.Word2Vec
Sets the maximum length (in words) of each sentence in the input data.
setMetricLabel(double) - 类 中的方法org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
 
setMetricLabel(double) - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
 
setMetricName(String) - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
 
setMetricName(String) - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
 
setMetricName(String) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
 
setMetricName(String) - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
 
setMetricName(String) - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
 
setMetricName(String) - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
 
setMin(double) - Method in class org.apache.spark.ml.feature.MinMaxScaler
 
setMin(double) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
 
setMinConfidence(double) - Method in class org.apache.spark.ml.fpm.FPGrowth
 
setMinConfidence(double) - Method in class org.apache.spark.ml.fpm.FPGrowthModel
 
setMinConfidence(double) - Method in class org.apache.spark.mllib.fpm.AssociationRules
Sets the minimal confidence (default: 0.8).
setMinCount(int) - Method in class org.apache.spark.ml.feature.Word2Vec
 
setMinCount(int) - Method in class org.apache.spark.mllib.feature.Word2Vec
Sets minCount, the minimum number of times a token must appear to be included in the word2vec model's vocabulary (default: 5).
setMinDF(double) - Method in class org.apache.spark.ml.feature.CountVectorizer
 
setMinDivisibleClusterSize(double) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
 
setMinDivisibleClusterSize(double) - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
Sets the minimum number of points (if greater than or equal to 1.0) or the minimum proportion of points (if less than 1.0) of a divisible cluster (default: 1).
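The entry above encodes a dual-meaning threshold: values at or above 1.0 are an absolute point count, while values below 1.0 are a fraction of all points. A minimal sketch of that interpretation, in plain Python rather than Spark's actual implementation (the helper name is hypothetical):

```python
def min_divisible_size(threshold: float, total_points: int) -> int:
    """Interpret a minDivisibleClusterSize-style threshold.

    threshold >= 1.0  -> an absolute minimum point count
    0 < threshold < 1 -> a minimum proportion of total_points

    Illustrative sketch only; not Spark's code.
    """
    if threshold >= 1.0:
        return int(threshold)
    return int(threshold * total_points)

# A cluster is considered divisible only if it holds at least this many points.
print(min_divisible_size(20.0, 1000))  # absolute count: 20
print(min_divisible_size(0.05, 1000))  # proportion: 5% of 1000 = 50
```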
setMinDocFreq(int) - Method in class org.apache.spark.ml.feature.IDF
 
setMiniBatchFraction(double) - Method in class org.apache.spark.mllib.classification.StreamingLogisticRegressionWithSGD
Set the fraction of each batch to use for updates.
setMiniBatchFraction(double) - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
Mini-batch fraction in (0, 1], which sets the fraction of documents sampled and used in each iteration.
setMiniBatchFraction(double) - Method in class org.apache.spark.mllib.optimization.GradientDescent
Set the fraction of data to be used for each SGD iteration.
setMiniBatchFraction(double) - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
Set the fraction of each batch to use for updates.
setMinInfoGain(double) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
 
setMinInfoGain(double) - Method in class org.apache.spark.ml.classification.GBTClassifier
 
setMinInfoGain(double) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
 
setMinInfoGain(double) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
 
setMinInfoGain(double) - Method in class org.apache.spark.ml.regression.GBTRegressor
 
setMinInfoGain(double) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
 
setMinInfoGain(double) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
setMinInstancesPerNode(int) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
 
setMinInstancesPerNode(int) - Method in class org.apache.spark.ml.classification.GBTClassifier
 
setMinInstancesPerNode(int) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
 
setMinInstancesPerNode(int) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
 
setMinInstancesPerNode(int) - Method in class org.apache.spark.ml.regression.GBTRegressor
 
setMinInstancesPerNode(int) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
 
setMinInstancesPerNode(int) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
setMinSupport(double) - Method in class org.apache.spark.ml.fpm.FPGrowth
 
setMinSupport(double) - Method in class org.apache.spark.ml.fpm.PrefixSpan
 
setMinSupport(double) - Method in class org.apache.spark.mllib.fpm.FPGrowth
Sets the minimal support level (default: 0.3).
setMinSupport(double) - Method in class org.apache.spark.mllib.fpm.PrefixSpan
Sets the minimal support level (default: 0.1).
setMinTF(double) - Method in class org.apache.spark.ml.feature.CountVectorizer
 
setMinTF(double) - Method in class org.apache.spark.ml.feature.CountVectorizerModel
 
setMinTokenLength(int) - Method in class org.apache.spark.ml.feature.RegexTokenizer
 
setMinWeightFractionPerNode(double) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
 
setMinWeightFractionPerNode(double) - Method in class org.apache.spark.ml.classification.GBTClassifier
 
setMinWeightFractionPerNode(double) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
 
setMinWeightFractionPerNode(double) - Method in class org.apache.spark.ml.regression.GBTRegressor
 
setMinWeightFractionPerNode(double) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
setMissingValue(double) - Method in class org.apache.spark.ml.feature.Imputer
 
setModelType(String) - Method in class org.apache.spark.ml.classification.NaiveBayes
Set the model type using a string (case-sensitive).
setModelType(String) - Method in class org.apache.spark.mllib.classification.NaiveBayes
Set the model type using a string (case-sensitive).
setN(int) - Method in class org.apache.spark.ml.feature.NGram
 
setName(String) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Assign a name to this RDD.
setName(String) - Method in class org.apache.spark.api.java.JavaPairRDD
Assign a name to this RDD.
setName(String) - Method in class org.apache.spark.api.java.JavaRDD
Assign a name to this RDD.
setName(String) - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
 
setName(String) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
 
setName(String) - Method in class org.apache.spark.rdd.RDD
Assign a name to this RDD.
setNames(String[]) - Method in class org.apache.spark.ml.feature.VectorSlicer
 
setNonnegative(boolean) - Method in class org.apache.spark.ml.recommendation.ALS
 
setNonnegative(boolean) - Method in class org.apache.spark.mllib.recommendation.ALS
Set whether the least-squares problems solved at each iteration should have nonnegativity constraints.
setNullAt(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
 
setNullAt(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
 
setNumBins(int) - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
 
setNumBlocks(int) - Method in class org.apache.spark.ml.recommendation.ALS
Sets both numUserBlocks and numItemBlocks to the specific value.
setNumBuckets(int) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
 
setNumBucketsArray(int[]) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
 
setNumClasses(int) - Method in class org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
Set the number of possible outcomes for a k-class classification problem in Multinomial Logistic Regression.
setNumClasses(int) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
setNumCorrections(int) - Method in class org.apache.spark.mllib.optimization.LBFGS
Set the number of corrections used in the LBFGS update.
setNumFeatures(int) - Method in class org.apache.spark.ml.feature.FeatureHasher
 
setNumFeatures(int) - Method in class org.apache.spark.ml.feature.HashingTF
 
setNumFolds(int) - Method in class org.apache.spark.ml.tuning.CrossValidator
 
setNumHashTables(int) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
 
setNumHashTables(int) - Method in class org.apache.spark.ml.feature.MinHashLSH
 
setNumItemBlocks(int) - Method in class org.apache.spark.ml.recommendation.ALS
 
setNumIterations(int) - Method in class org.apache.spark.mllib.classification.StreamingLogisticRegressionWithSGD
Set the number of iterations of gradient descent to run per update.
setNumIterations(int) - Method in class org.apache.spark.mllib.feature.Word2Vec
Sets the number of iterations (default: 1), which should be smaller than or equal to the number of partitions.
setNumIterations(int) - Method in class org.apache.spark.mllib.optimization.GradientDescent
Set the number of iterations for SGD.
setNumIterations(int) - Method in class org.apache.spark.mllib.optimization.LBFGS
Set the maximal number of iterations for L-BFGS.
setNumIterations(int) - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
Set the number of iterations of gradient descent to run per update.
setNumIterations(int) - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
 
setNumPartitions(int) - Method in class org.apache.spark.ml.feature.Word2Vec
 
setNumPartitions(int) - Method in class org.apache.spark.ml.fpm.FPGrowth
 
setNumPartitions(int) - Method in class org.apache.spark.mllib.feature.Word2Vec
Sets the number of partitions (default: 1).
setNumPartitions(int) - Method in class org.apache.spark.mllib.fpm.FPGrowth
Sets the number of partitions used by parallel FP-growth (default: same as input data).
setNumRows(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatch
Sets the number of rows in this batch.
setNumTopFeatures(int) - Method in class org.apache.spark.ml.feature.ChiSqSelector
 
setNumTopFeatures(int) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
 
setNumTrees(int) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
 
setNumTrees(int) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
 
setNumUserBlocks(int) - Method in class org.apache.spark.ml.recommendation.ALS
 
setOffsetCol(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
Sets the value of param offsetCol.
setOptimizeDocConcentration(boolean) - Method in class org.apache.spark.ml.clustering.LDA
 
setOptimizeDocConcentration(boolean) - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
Sets whether to optimize the docConcentration parameter during training.
setOptimizer(LDAOptimizer) - Method in class org.apache.spark.mllib.clustering.LDA
:: DeveloperApi :: LDAOptimizer used to perform the actual calculation (default = EMLDAOptimizer).
setOptimizer(String) - Method in class org.apache.spark.ml.clustering.LDA
 
setOptimizer(String) - Method in class org.apache.spark.mllib.clustering.LDA
Set the LDAOptimizer used to perform the actual calculation by algorithm name.
setOrNull(long, int, int) - Method in class org.apache.spark.sql.types.Decimal
Set this Decimal to the given unscaled Long, with a given precision and scale, and return it, or return null if it cannot be set due to overflow.
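The setOrNull entry above describes a value that either fits in a fixed number of decimal digits or becomes null. A minimal sketch of that overflow rule in plain Python (not Spark's implementation; None stands in for Scala's null, and the helper name is hypothetical):

```python
def set_or_null(unscaled: int, precision: int, scale: int):
    """Return the decimal value unscaled * 10**-scale, or None if the
    unscaled value needs more than `precision` decimal digits.

    Illustrative sketch of the overflow check only; not Spark's code.
    """
    if abs(unscaled) >= 10 ** precision:
        return None  # overflow: too many digits for this precision
    return unscaled / (10 ** scale)

print(set_or_null(12345, 5, 2))   # 12345 fits in 5 digits -> 123.45
print(set_or_null(123456, 5, 2))  # 6 digits overflow precision 5 -> None
```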
setOut(PrintStream) - Method in interface org.apache.spark.sql.hive.client.HiveClient
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.Binarizer
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.Bucketizer
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.ChiSqSelector
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.CountVectorizer
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.CountVectorizerModel
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.FeatureHasher
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.HashingTF
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.IDF
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.IDFModel
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.Imputer
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.ImputerModel
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.IndexToString
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.Interaction
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.MaxAbsScaler
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.MinHashLSH
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.MinHashLSHModel
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.MinMaxScaler
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.OneHotEncoder
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.PCA
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.PCAModel
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.RobustScaler
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.RobustScalerModel
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.StandardScaler
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.StandardScalerModel
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.StopWordsRemover
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.StringIndexer
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.StringIndexerModel
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.VectorAssembler
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.VectorIndexer
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.VectorIndexerModel
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.VectorSlicer
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.Word2Vec
 
setOutputCol(String) - Method in class org.apache.spark.ml.feature.Word2VecModel
 
setOutputCol(String) - Method in class org.apache.spark.ml.UnaryTransformer
 
setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.Binarizer
 
setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.Bucketizer
 
setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.Imputer
 
setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.ImputerModel
 
setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.OneHotEncoder
 
setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
 
setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
 
setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.StringIndexer
 
setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.StringIndexerModel
 
setP(double) - Method in class org.apache.spark.ml.feature.Normalizer
 
setParallelism(int) - Method in class org.apache.spark.ml.classification.OneVsRest
The implementation of parallel one vs. rest runs the classification for each class in separate threads.
setParallelism(int) - Method in class org.apache.spark.ml.tuning.CrossValidator
Set the maximum level of parallelism to evaluate models in parallel.
setParallelism(int) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
Set the maximum level of parallelism to evaluate models in parallel.
setParent(Estimator<M>) - Method in class org.apache.spark.ml.Model
Sets the parent of this model (Java API).
setPattern(String) - Method in class org.apache.spark.ml.feature.RegexTokenizer
 
setPeacePeriod(int) - Method in class org.apache.spark.mllib.stat.test.StreamingTest
Set the number of initial batches to ignore.
setPercentile(double) - Method in class org.apache.spark.ml.feature.ChiSqSelector
 
setPercentile(double) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
 
setPredictionCol(String) - Method in class org.apache.spark.ml.classification.OneVsRest
 
setPredictionCol(String) - Method in class org.apache.spark.ml.classification.OneVsRestModel
 
setPredictionCol(String) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
 
setPredictionCol(String) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
 
setPredictionCol(String) - Method in class org.apache.spark.ml.clustering.GaussianMixture
 
setPredictionCol(String) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
 
setPredictionCol(String) - Method in class org.apache.spark.ml.clustering.KMeans
 
setPredictionCol(String) - Method in class org.apache.spark.ml.clustering.KMeansModel
 
setPredictionCol(String) - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
 
setPredictionCol(String) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
 
setPredictionCol(String) - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
 
setPredictionCol(String) - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
 
setPredictionCol(String) - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
 
setPredictionCol(String) - Method in class org.apache.spark.ml.fpm.FPGrowth
 
setPredictionCol(String) - Method in class org.apache.spark.ml.fpm.FPGrowthModel
 
setPredictionCol(String) - Method in class org.apache.spark.ml.PredictionModel
 
setPredictionCol(String) - Method in class org.apache.spark.ml.Predictor
 
setPredictionCol(String) - Method in class org.apache.spark.ml.recommendation.ALS
 
setPredictionCol(String) - Method in class org.apache.spark.ml.recommendation.ALSModel
 
setPredictionCol(String) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
 
setPredictionCol(String) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
 
setPredictionCol(String) - Method in class org.apache.spark.ml.regression.IsotonicRegression
 
setPredictionCol(String) - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
 
setProbabilityCol(String) - Method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
 
setProbabilityCol(String) - Method in class org.apache.spark.ml.classification.ProbabilisticClassifier
 
setProbabilityCol(String) - Method in class org.apache.spark.ml.clustering.GaussianMixture
 
setProbabilityCol(String) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
 
setProbabilityCol(String) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
 
setProductBlocks(int) - Method in class org.apache.spark.mllib.recommendation.ALS
Set the number of product blocks to parallelize the computation.
setPropertiesFile(String) - Method in class org.apache.spark.launcher.AbstractLauncher
Set a custom properties file with Spark configuration for the application.
setPropertiesFile(String) - Method in class org.apache.spark.launcher.SparkLauncher
 
setProperty(String, String) - Static method in interface org.apache.spark.sql.connector.catalog.NamespaceChange
Create a NamespaceChange for setting a namespace property.
setProperty(String, String) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
Create a TableChange for setting a table property.
setQuantileCalculationStrategy(Enumeration.Value) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
setQuantileProbabilities(double[]) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
 
setQuantileProbabilities(double[]) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
 
setQuantilesCol(String) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
 
setQuantilesCol(String) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
 
setRandomCenters(int, double, long) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
Initialize random centers, requiring only the number of dimensions.
setRank(int) - Method in class org.apache.spark.ml.recommendation.ALS
 
setRank(int) - Method in class org.apache.spark.mllib.recommendation.ALS
Set the rank of the feature matrices computed (number of features).
setRatingCol(String) - Method in class org.apache.spark.ml.recommendation.ALS
 
setRawPredictionCol(String) - Method in class org.apache.spark.ml.classification.ClassificationModel
 
setRawPredictionCol(String) - Method in class org.apache.spark.ml.classification.Classifier
 
setRawPredictionCol(String) - Method in class org.apache.spark.ml.classification.OneVsRest
 
setRawPredictionCol(String) - Method in class org.apache.spark.ml.classification.OneVsRestModel
 
setRawPredictionCol(String) - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
 
setRegParam(double) - Method in class org.apache.spark.ml.classification.LinearSVC
Set the regularization parameter.
setRegParam(double) - Method in class org.apache.spark.ml.classification.LogisticRegression
Set the regularization parameter.
setRegParam(double) - Method in class org.apache.spark.ml.recommendation.ALS
 
setRegParam(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
Sets the regularization parameter for L2 regularization.
setRegParam(double) - Method in class org.apache.spark.ml.regression.LinearRegression
Set the regularization parameter.
setRegParam(double) - Method in class org.apache.spark.mllib.classification.StreamingLogisticRegressionWithSGD
Set the regularization parameter.
setRegParam(double) - Method in class org.apache.spark.mllib.optimization.GradientDescent
Set the regularization parameter.
setRegParam(double) - Method in class org.apache.spark.mllib.optimization.LBFGS
Set the regularization parameter.
setRegParam(double) - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
Set the regularization parameter.
setRelativeError(double) - Method in class org.apache.spark.ml.feature.Imputer
 
setRelativeError(double) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
 
setRelativeError(double) - Method in class org.apache.spark.ml.feature.RobustScaler
 
setRequiredColumns(Configuration, StructType, StructType) - Static method in class org.apache.spark.sql.hive.orc.OrcFileFormat
 
setRest(long, int, VD, ED) - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
 
setSample(RDD<Object>) - Method in class org.apache.spark.mllib.stat.KernelDensity
Sets the sample to use for density estimation.
setSample(JavaRDD<Double>) - Method in class org.apache.spark.mllib.stat.KernelDensity
Sets the sample to use for density estimation (for Java users).
setScalingVec(Vector) - Method in class org.apache.spark.ml.feature.ElementwiseProduct
 
setSeed(long) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
 
setSeed(long) - Method in class org.apache.spark.ml.classification.GBTClassifier
 
setSeed(long) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
Set the seed for weights initialization if weights are not set.
setSeed(long) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
 
setSeed(long) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
 
setSeed(long) - Method in class org.apache.spark.ml.clustering.GaussianMixture
 
setSeed(long) - Method in class org.apache.spark.ml.clustering.KMeans
 
setSeed(long) - Method in class org.apache.spark.ml.clustering.LDA
 
setSeed(long) - Method in class org.apache.spark.ml.clustering.LDAModel
 
setSeed(long) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
 
setSeed(long) - Method in class org.apache.spark.ml.feature.MinHashLSH
 
setSeed(long) - Method in class org.apache.spark.ml.feature.Word2Vec
 
setSeed(long) - Method in class org.apache.spark.ml.recommendation.ALS
 
setSeed(long) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
 
setSeed(long) - Method in class org.apache.spark.ml.regression.GBTRegressor
 
setSeed(long) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
 
setSeed(long) - Method in class org.apache.spark.ml.tuning.CrossValidator
 
setSeed(long) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
 
setSeed(long) - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
Sets the random seed (default: hash value of the class name).
setSeed(long) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
Set the random seed.
setSeed(long) - Method in class org.apache.spark.mllib.clustering.KMeans
Set the random seed for cluster initialization.
setSeed(long) - Method in class org.apache.spark.mllib.clustering.LDA
Set the random seed for cluster initialization.
setSeed(long) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
Set the random seed for cluster initialization.
setSeed(long) - Method in class org.apache.spark.mllib.feature.Word2Vec
Sets the random seed (default: a random long integer).
setSeed(long) - Method in class org.apache.spark.mllib.random.ExponentialGenerator
 
setSeed(long) - Method in class org.apache.spark.mllib.random.GammaGenerator
 
setSeed(long) - Method in class org.apache.spark.mllib.random.LogNormalGenerator
 
setSeed(long) - Method in class org.apache.spark.mllib.random.PoissonGenerator
 
setSeed(long) - Method in class org.apache.spark.mllib.random.StandardNormalGenerator
 
setSeed(long) - Method in class org.apache.spark.mllib.random.UniformGenerator
 
setSeed(long) - Method in class org.apache.spark.mllib.random.WeibullGenerator
 
setSeed(long) - Method in class org.apache.spark.mllib.recommendation.ALS
Sets a random seed to have deterministic results.
setSeed(long) - Method in class org.apache.spark.util.random.BernoulliCellSampler
 
setSeed(long) - Method in class org.apache.spark.util.random.BernoulliSampler
 
setSeed(long) - Method in class org.apache.spark.util.random.PoissonSampler
 
setSeed(long) - Method in interface org.apache.spark.util.random.Pseudorandom
Set the random seed.
setSelectorType(String) - Method in class org.apache.spark.ml.feature.ChiSqSelector
 
setSelectorType(String) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
 
setSequenceCol(String) - Method in class org.apache.spark.ml.fpm.PrefixSpan
 
setSerializer(Serializer) - Method in class org.apache.spark.rdd.CoGroupedRDD
Set a serializer for this RDD's shuffle, or null to use the default (spark.serializer).
setSerializer(Serializer) - Method in class org.apache.spark.rdd.ShuffledRDD
Set a serializer for this RDD's shuffle, or null to use the default (spark.serializer).
setSize(int) - Method in class org.apache.spark.ml.feature.VectorSizeHint
 
setSmoothing(double) - Method in class org.apache.spark.ml.classification.NaiveBayes
Set the smoothing parameter.
setSolver(String) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
Sets the value of param solver.
setSolver(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
Sets the solver algorithm used for optimization.
setSolver(String) - Method in class org.apache.spark.ml.regression.LinearRegression
Set the solver algorithm used for optimization.
setSparkContextSessionConf(SparkSession, Map<Object, Object>) - Static method in class org.apache.spark.sql.api.r.SQLUtils
 
setSparkHome(String) - Method in class org.apache.spark.launcher.SparkLauncher
Set a custom Spark installation location for the application.
setSparkHome(String) - Method in class org.apache.spark.SparkConf
Set the location where Spark is installed on worker nodes.
setSplits(double[]) - Method in class org.apache.spark.ml.feature.Bucketizer
 
setSplitsArray(double[][]) - Method in class org.apache.spark.ml.feature.Bucketizer
 
setSQLReadObject(Function2<DataInputStream, Object, Object>) - Static method in class org.apache.spark.api.r.SerDe
 
setSQLWriteObject(Function2<DataOutputStream, Object, Object>) - Static method in class org.apache.spark.api.r.SerDe
 
setSrcCol(String) - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
 
setSrcOnly(long, int, VD) - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
 
setStages(PipelineStage[]) - Method in class org.apache.spark.ml.Pipeline
 
setStandardization(boolean) - Method in class org.apache.spark.ml.classification.LinearSVC
Whether to standardize the training features before fitting the model.
setStandardization(boolean) - Method in class org.apache.spark.ml.classification.LogisticRegression
Whether to standardize the training features before fitting the model.
setStandardization(boolean) - Method in class org.apache.spark.ml.regression.LinearRegression
Whether to standardize the training features before fitting the model.
setStatement(String) - Method in class org.apache.spark.ml.feature.SQLTransformer
 
setStepSize(double) - Method in class org.apache.spark.ml.classification.GBTClassifier
 
setStepSize(double) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
Sets the value of param stepSize (applicable only for solver "gd").
setStepSize(double) - Method in class org.apache.spark.ml.feature.Word2Vec
 
setStepSize(double) - Method in class org.apache.spark.ml.regression.GBTRegressor
 
setStepSize(double) - Method in class org.apache.spark.mllib.classification.StreamingLogisticRegressionWithSGD
Set the step size for gradient descent.
setStepSize(double) - Method in class org.apache.spark.mllib.optimization.GradientDescent
Set the initial step size of SGD for the first step.
setStepSize(double) - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
Set the step size for gradient descent.
setStopWords(String[]) - Method in class org.apache.spark.ml.feature.StopWordsRemover
 
setStrategy(String) - Method in class org.apache.spark.ml.feature.Imputer
Imputation strategy.
setStringIndexerOrderType(String) - Method in class org.apache.spark.ml.feature.RFormula
 
setStringOrderType(String) - Method in class org.apache.spark.ml.feature.StringIndexer
 
setSubsamplingRate(double) - Method in class org.apache.spark.ml.classification.GBTClassifier
 
setSubsamplingRate(double) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
 
setSubsamplingRate(double) - Method in class org.apache.spark.ml.clustering.LDA
 
setSubsamplingRate(double) - Method in class org.apache.spark.ml.regression.GBTRegressor
 
setSubsamplingRate(double) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
 
setSubsamplingRate(double) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
setSummary(Option<T>) - Method in interface org.apache.spark.ml.util.HasTrainingSummary
 
setTau0(double) - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
A (positive) learning parameter that downweights early iterations.
setTestMethod(String) - Method in class org.apache.spark.mllib.stat.test.StreamingTest
Set the statistical method used for significance testing.
setThreshold(double) - Method in class org.apache.spark.ml.classification.LinearSVC
Set the threshold in binary classification.
setThreshold(double) - Method in class org.apache.spark.ml.classification.LinearSVCModel
 
setThreshold(double) - Method in class org.apache.spark.ml.classification.LogisticRegression
 
setThreshold(double) - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
 
setThreshold(double) - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
Set the threshold in binary classification, in range [0, 1].
setThreshold(double) - Method in class org.apache.spark.ml.feature.Binarizer
 
setThreshold(double) - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
Sets the threshold that separates positive predictions from negative predictions in Binary Logistic Regression.
setThreshold(double) - Method in class org.apache.spark.mllib.classification.SVMModel
Sets the threshold that separates positive predictions from negative predictions.
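The setThreshold entries above all describe the same decision rule: a model produces a raw score, and the threshold decides which side counts as positive. A minimal sketch of that rule in plain Python (illustrative only; the function name is hypothetical, and Spark's models apply this internally):

```python
def predict_with_threshold(score: float, threshold: float) -> float:
    """Separate positive (1.0) from negative (0.0) predictions by comparing
    a raw model score against a decision threshold. Sketch only; not
    Spark's implementation."""
    return 1.0 if score > threshold else 0.0

# Raising the threshold makes the classifier more conservative about
# predicting the positive class.
print(predict_with_threshold(0.7, 0.5))  # above threshold -> 1.0
print(predict_with_threshold(0.3, 0.5))  # below threshold -> 0.0
```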
setThresholds(double[]) - 类 中的方法org.apache.spark.ml.classification.LogisticRegression
 
setThresholds(double[]) - 类 中的方法org.apache.spark.ml.classification.LogisticRegressionModel
 
setThresholds(double[]) - 接口 中的方法org.apache.spark.ml.classification.LogisticRegressionParams
Set thresholds in multiclass (or binary) classification to adjust the probability of predicting each class.
setThresholds(double[]) - 类 中的方法org.apache.spark.ml.classification.ProbabilisticClassificationModel
 
setThresholds(double[]) - 类 中的方法org.apache.spark.ml.classification.ProbabilisticClassifier
 
setThresholds(double[]) - 类 中的方法org.apache.spark.ml.feature.Binarizer
 
setThroughOrigin(boolean) - 类 中的方法org.apache.spark.ml.evaluation.RegressionEvaluator
 
setTimeoutDuration(long) - 接口 中的方法org.apache.spark.sql.streaming.GroupState
Set the timeout duration in ms for this key.
setTimeoutDuration(String) - 接口 中的方法org.apache.spark.sql.streaming.GroupState
Set the timeout duration for this key as a string.
setTimeoutTimestamp(long) - 接口 中的方法org.apache.spark.sql.streaming.GroupState
Set the timeout timestamp for this key as milliseconds in epoch time.
setTimeoutTimestamp(long, String) - 接口 中的方法org.apache.spark.sql.streaming.GroupState
Set the timeout timestamp for this key as milliseconds in epoch time and an additional duration as a string (e.g. "1 hour", "2 days", etc.).
setTimeoutTimestamp(Date) - 接口 中的方法org.apache.spark.sql.streaming.GroupState
Set the timeout timestamp for this key as a java.sql.Date.
setTimeoutTimestamp(Date, String) - 接口 中的方法org.apache.spark.sql.streaming.GroupState
Set the timeout timestamp for this key as a java.sql.Date and an additional duration as a string (e.g. "1 hour", "2 days", etc.).
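The duration strings accepted by these GroupState timeout setters (e.g. "1 hour", "2 days") can be illustrated with a minimal parser. This is an illustrative stand-in written in plain Python, not Spark's actual interval parser; the accepted unit names here are an assumption for the sketch.

```python
# Minimal sketch of parsing duration strings like "1 hour" or "2 days"
# into milliseconds, as passed to GroupState timeout setters.
# Illustrative only -- not Spark's actual CalendarInterval parser.
UNIT_MS = {
    "millisecond": 1,
    "second": 1000,
    "minute": 60 * 1000,
    "hour": 60 * 60 * 1000,
    "day": 24 * 60 * 60 * 1000,
}

def duration_to_ms(duration: str) -> int:
    value, unit = duration.strip().split()
    unit = unit.rstrip("s")  # accept "hours" as well as "hour"
    return int(value) * UNIT_MS[unit]

print(duration_to_ms("1 hour"))  # 3600000
print(duration_to_ms("2 days"))  # 172800000
```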
setTol(double) - 类 中的方法org.apache.spark.ml.classification.LinearSVC
Set the convergence tolerance of iterations.
setTol(double) - 类 中的方法org.apache.spark.ml.classification.LogisticRegression
Set the convergence tolerance of iterations.
setTol(double) - 类 中的方法org.apache.spark.ml.classification.MultilayerPerceptronClassifier
Set the convergence tolerance of iterations.
setTol(double) - 类 中的方法org.apache.spark.ml.clustering.GaussianMixture
 
setTol(double) - 类 中的方法org.apache.spark.ml.clustering.KMeans
 
setTol(double) - 类 中的方法org.apache.spark.ml.regression.AFTSurvivalRegression
Set the convergence tolerance of iterations.
setTol(double) - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression
Sets the convergence tolerance of iterations.
setTol(double) - 类 中的方法org.apache.spark.ml.regression.LinearRegression
Set the convergence tolerance of iterations.
setToLowercase(boolean) - 类 中的方法org.apache.spark.ml.feature.RegexTokenizer
 
setTopicConcentration(double) - 类 中的方法org.apache.spark.ml.clustering.LDA
 
setTopicConcentration(double) - 类 中的方法org.apache.spark.mllib.clustering.LDA
Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics' distributions over terms.
setTopicDistributionCol(String) - 类 中的方法org.apache.spark.ml.clustering.LDA
 
setTopicDistributionCol(String) - 类 中的方法org.apache.spark.ml.clustering.LDAModel
 
setTrainRatio(double) - 类 中的方法org.apache.spark.ml.tuning.TrainValidationSplit
 
setTreeStrategy(Strategy) - 类 中的方法org.apache.spark.mllib.tree.configuration.BoostingStrategy
 
setUiRoot(ContextHandler, UIRoot) - 类 中的静态方法org.apache.spark.status.api.v1.UIRootFromServletContext
 
setupCommitter(TaskAttemptContext) - 类 中的方法org.apache.spark.internal.io.HadoopMapRedCommitProtocol
 
setUpdater(Updater) - 类 中的方法org.apache.spark.mllib.optimization.GradientDescent
Set the updater function to actually perform a gradient step in a given direction.
setUpdater(Updater) - 类 中的方法org.apache.spark.mllib.optimization.LBFGS
Set the updater function to actually perform a gradient step in a given direction.
SetupDriver(org.apache.spark.rpc.RpcEndpointRef) - 类 的构造器org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver
 
SetupDriver$() - 类 的构造器org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver$
 
setupGroups(int, org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations) - 类 中的方法org.apache.spark.rdd.DefaultPartitionCoalescer
Initializes targetLen partition groups.
setupJob(JobContext) - 类 中的方法org.apache.spark.internal.io.FileCommitProtocol
Sets up a job.
setupJob(JobContext) - 类 中的方法org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
 
setUpper(double) - 类 中的方法org.apache.spark.ml.feature.RobustScaler
 
setUpperBoundsOnCoefficients(Matrix) - 类 中的方法org.apache.spark.ml.classification.LogisticRegression
Set the upper bounds on coefficients if fitting under bound constrained optimization.
setUpperBoundsOnIntercepts(Vector) - 类 中的方法org.apache.spark.ml.classification.LogisticRegression
Set the upper bounds on intercepts if fitting under bound constrained optimization.
setupTask(TaskAttemptContext) - 类 中的方法org.apache.spark.internal.io.FileCommitProtocol
Sets up a task within a job.
setupTask(TaskAttemptContext) - 类 中的方法org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
 
setupUI(org.apache.spark.ui.SparkUI) - 接口 中的方法org.apache.spark.status.AppHistoryServerPlugin
Sets up UI of this plugin to rebuild the history UI.
setUseNodeIdCache(boolean) - 类 中的方法org.apache.spark.mllib.tree.configuration.Strategy
 
setUserBlocks(int) - 类 中的方法org.apache.spark.mllib.recommendation.ALS
Set the number of user blocks to parallelize the computation.
setUserCol(String) - 类 中的方法org.apache.spark.ml.recommendation.ALS
 
setUserCol(String) - 类 中的方法org.apache.spark.ml.recommendation.ALSModel
 
setValidateData(boolean) - 类 中的方法org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
Set if the algorithm should validate data before training.
setValidationIndicatorCol(String) - 类 中的方法org.apache.spark.ml.classification.GBTClassifier
 
setValidationIndicatorCol(String) - 类 中的方法org.apache.spark.ml.regression.GBTRegressor
 
setValidationTol(double) - 类 中的方法org.apache.spark.mllib.tree.configuration.BoostingStrategy
 
setVarianceCol(String) - 类 中的方法org.apache.spark.ml.regression.DecisionTreeRegressionModel
 
setVarianceCol(String) - 类 中的方法org.apache.spark.ml.regression.DecisionTreeRegressor
 
setVariancePower(double) - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression
Sets the value of param variancePower.
setVectorSize(int) - 类 中的方法org.apache.spark.ml.feature.Word2Vec
 
setVectorSize(int) - 类 中的方法org.apache.spark.mllib.feature.Word2Vec
Sets vector size (default: 100).
setVerbose(boolean) - 类 中的方法org.apache.spark.launcher.AbstractLauncher
Enables verbose reporting for SparkSubmit.
setVerbose(boolean) - 类 中的方法org.apache.spark.launcher.SparkLauncher
 
setVocabSize(int) - 类 中的方法org.apache.spark.ml.feature.CountVectorizer
 
setWeightCol(String) - 类 中的方法org.apache.spark.ml.classification.DecisionTreeClassifier
Sets the value of param weightCol.
setWeightCol(String) - 类 中的方法org.apache.spark.ml.classification.GBTClassifier
Sets the value of param weightCol.
setWeightCol(String) - 类 中的方法org.apache.spark.ml.classification.LinearSVC
Set the value of param weightCol.
setWeightCol(String) - 类 中的方法org.apache.spark.ml.classification.LogisticRegression
Sets the value of param weightCol.
setWeightCol(String) - 类 中的方法org.apache.spark.ml.classification.NaiveBayes
Sets the value of param weightCol.
setWeightCol(String) - 类 中的方法org.apache.spark.ml.classification.OneVsRest
Sets the value of param weightCol.
setWeightCol(String) - 类 中的方法org.apache.spark.ml.clustering.PowerIterationClustering
 
setWeightCol(String) - 类 中的方法org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
 
setWeightCol(String) - 类 中的方法org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
 
setWeightCol(String) - 类 中的方法org.apache.spark.ml.evaluation.RegressionEvaluator
 
setWeightCol(String) - 类 中的方法org.apache.spark.ml.regression.DecisionTreeRegressor
Sets the value of param weightCol.
setWeightCol(String) - 类 中的方法org.apache.spark.ml.regression.GBTRegressor
Sets the value of param weightCol.
setWeightCol(String) - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression
Sets the value of param weightCol.
setWeightCol(String) - 类 中的方法org.apache.spark.ml.regression.IsotonicRegression
 
setWeightCol(String) - 类 中的方法org.apache.spark.ml.regression.LinearRegression
Whether to over-/under-sample training instances according to the given weights in weightCol.
setWindowSize(int) - 类 中的方法org.apache.spark.ml.feature.Word2Vec
 
setWindowSize(int) - 类 中的方法org.apache.spark.mllib.feature.Word2Vec
Sets the window of words (default: 5).
setWindowSize(int) - 类 中的方法org.apache.spark.mllib.stat.test.StreamingTest
Set the number of batches to compute significance tests over.
setWithCentering(boolean) - 类 中的方法org.apache.spark.ml.feature.RobustScaler
 
setWithMean(boolean) - 类 中的方法org.apache.spark.ml.feature.StandardScaler
 
setWithMean(boolean) - 类 中的方法org.apache.spark.mllib.feature.StandardScalerModel
:: DeveloperApi ::
setWithScaling(boolean) - 类 中的方法org.apache.spark.ml.feature.RobustScaler
 
setWithStd(boolean) - 类 中的方法org.apache.spark.ml.feature.StandardScaler
 
setWithStd(boolean) - 类 中的方法org.apache.spark.mllib.feature.StandardScalerModel
:: DeveloperApi ::
sha1(Column) - 类 中的静态方法org.apache.spark.sql.functions
Calculates the SHA-1 digest of a binary column and returns the value as a 40-character hex string.
sha2(Column, int) - 类 中的静态方法org.apache.spark.sql.functions
Calculates the SHA-2 family of hash functions of a binary column and returns the value as a hex string.
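The hex strings these functions return are the standard digests, which can be reproduced with Python's hashlib for illustration: SHA-1 yields 40 hex characters, and the SHA-256 member of the SHA-2 family yields 64.

```python
import hashlib

# The hex output of functions.sha1 / functions.sha2 matches the standard
# digests of the binary input.
data = b"spark"
print(hashlib.sha1(data).hexdigest())    # 40-character hex string
print(hashlib.sha256(data).hexdigest())  # 64-character hex string
```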
shape() - 类 中的方法org.apache.spark.mllib.random.GammaGenerator
 
SharedMessageLoop - org.apache.spark.rpc.netty中的类
A message loop that serves multiple RPC endpoints, using a shared thread pool.
SharedMessageLoop(SparkConf, Dispatcher, int) - 类 的构造器org.apache.spark.rpc.netty.SharedMessageLoop
 
SharedParamsCodeGen - org.apache.spark.ml.param.shared中的类
Code generator for shared params (sharedParams.scala).
SharedParamsCodeGen() - 类 的构造器org.apache.spark.ml.param.shared.SharedParamsCodeGen
 
SharedReadWrite$() - 类 的构造器org.apache.spark.ml.Pipeline.SharedReadWrite$
 
sharedState() - 类 中的方法org.apache.spark.sql.SparkSession
 
shiftLeft(Column, int) - 类 中的静态方法org.apache.spark.sql.functions
Shift the given value numBits left.
shiftRight(Column, int) - 类 中的静态方法org.apache.spark.sql.functions
(Signed) shift the given value numBits right.
shiftRightUnsigned(Column, int) - 类 中的静态方法org.apache.spark.sql.functions
Unsigned shift the given value numBits right.
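The difference between the signed and unsigned right shifts can be sketched on a 32-bit integer in plain Python; this is a conceptual illustration, not Spark's implementation.

```python
# Signed (arithmetic) vs. unsigned (zero-fill) right shift on 32-bit ints,
# mirroring the semantics of shiftRight and shiftRightUnsigned.
def shift_right_signed(x: int, num_bits: int) -> int:
    return x >> num_bits  # Python's >> preserves the sign

def shift_right_unsigned(x: int, num_bits: int) -> int:
    # Reinterpret the 32-bit two's-complement pattern, then zero-fill shift.
    return (x & 0xFFFFFFFF) >> num_bits

print(shift_right_signed(-8, 1))    # -4
print(shift_right_unsigned(-8, 1))  # 2147483644
```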
SHORT() - 类 中的静态方法org.apache.spark.sql.Encoders
An encoder for nullable short type.
ShortestPaths - org.apache.spark.graphx.lib中的类
Computes shortest paths to the given set of landmark vertices, returning a graph where each vertex attribute is a map containing the shortest-path distance to each reachable landmark.
ShortestPaths() - 类 的构造器org.apache.spark.graphx.lib.ShortestPaths
 
ShortExactNumeric - org.apache.spark.sql.types中的类
 
ShortExactNumeric() - 类 的构造器org.apache.spark.sql.types.ShortExactNumeric
 
shortName() - 接口 中的方法org.apache.spark.ml.util.MLFormatRegister
 
shortName() - 类 中的方法org.apache.spark.sql.hive.execution.HiveFileFormat
 
shortName() - 类 中的方法org.apache.spark.sql.hive.orc.OrcFileFormat
 
shortName() - 接口 中的方法org.apache.spark.sql.sources.DataSourceRegister
The string that represents the format that this data source provider uses.
shortTimeUnitString(TimeUnit) - 类 中的静态方法org.apache.spark.streaming.ui.UIUtils
Return the short string for a TimeUnit.
ShortType - 类 中的静态变量org.apache.spark.sql.types.DataTypes
Gets the ShortType object.
ShortType - org.apache.spark.sql.types中的类
The data type representing Short values.
ShortType() - 类 的构造器org.apache.spark.sql.types.ShortType
 
shortVersion(String) - 类 中的静态方法org.apache.spark.util.VersionUtils
Given a Spark version string, return the short version string.
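The "short version" here is the major.minor prefix of a full version string. A minimal sketch, assuming a simple `major.minor` regex (not VersionUtils' exact implementation):

```python
import re

# Illustrative extraction of the "major.minor" short version from a Spark
# version string, e.g. "3.0.1" -> "3.0". The regex is an assumption for
# illustration only.
def short_version(spark_version: str) -> str:
    m = re.match(r"^(\d+)\.(\d+)", spark_version)
    if m is None:
        raise ValueError(f"unrecognized Spark version: {spark_version}")
    return f"{m.group(1)}.{m.group(2)}"

print(short_version("3.0.1"))           # 3.0
print(short_version("2.4.8-SNAPSHOT"))  # 2.4
```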
shouldCloseFileAfterWrite(SparkConf, boolean) - 类 中的静态方法org.apache.spark.streaming.util.WriteAheadLogUtils
 
shouldDistributeGaussians(int, int) - 类 中的静态方法org.apache.spark.mllib.clustering.GaussianMixture
Heuristic to distribute the computation of the MultivariateGaussians, approximately when d is greater than 25 except for when k is very small.
shouldGoLeft(Vector) - 接口 中的方法org.apache.spark.ml.tree.Split
Return true (split to left) or false (split to right).
shouldGoLeft(int, Split[]) - 接口 中的方法org.apache.spark.ml.tree.Split
Return true (split to left) or false (split to right).
shouldOwn(Param<?>) - 接口 中的方法org.apache.spark.ml.param.Params
Validates that the input param belongs to this instance.
shouldRollover(long) - 接口 中的方法org.apache.spark.util.logging.RollingPolicy
Whether rollover should be initiated at this moment.
show(int) - 类 中的方法org.apache.spark.sql.Dataset
Displays the Dataset in a tabular form.
show() - 类 中的方法org.apache.spark.sql.Dataset
Displays the top 20 rows of Dataset in a tabular form.
show(boolean) - 类 中的方法org.apache.spark.sql.Dataset
Displays the top 20 rows of Dataset in a tabular form.
show(int, boolean) - 类 中的方法org.apache.spark.sql.Dataset
Displays the Dataset in a tabular form.
show(int, int) - 类 中的方法org.apache.spark.sql.Dataset
Displays the Dataset in a tabular form.
show(int, int, boolean) - 类 中的方法org.apache.spark.sql.Dataset
Displays the Dataset in a tabular form.
showBytesDistribution(String, Function2<TaskInfo, TaskMetrics, Object>, Seq<Tuple2<TaskInfo, TaskMetrics>>) - 类 中的静态方法org.apache.spark.scheduler.StatsReportListener
 
showBytesDistribution(String, Option<org.apache.spark.util.Distribution>) - 类 中的静态方法org.apache.spark.scheduler.StatsReportListener
 
showBytesDistribution(String, org.apache.spark.util.Distribution) - 类 中的静态方法org.apache.spark.scheduler.StatsReportListener
 
showDagVizForJob(int, Seq<org.apache.spark.ui.scope.RDDOperationGraph>) - 类 中的静态方法org.apache.spark.ui.UIUtils
Return a "DAG visualization" DOM element that expands into a visualization for a job.
showDagVizForStage(int, Option<org.apache.spark.ui.scope.RDDOperationGraph>) - 类 中的静态方法org.apache.spark.ui.UIUtils
Return a "DAG visualization" DOM element that expands into a visualization for a stage.
showDistribution(String, org.apache.spark.util.Distribution, Function1<Object, String>) - 类 中的静态方法org.apache.spark.scheduler.StatsReportListener
 
showDistribution(String, Option<org.apache.spark.util.Distribution>, Function1<Object, String>) - 类 中的静态方法org.apache.spark.scheduler.StatsReportListener
 
showDistribution(String, Option<org.apache.spark.util.Distribution>, String) - 类 中的静态方法org.apache.spark.scheduler.StatsReportListener
 
showDistribution(String, String, Function2<TaskInfo, TaskMetrics, Object>, Seq<Tuple2<TaskInfo, TaskMetrics>>) - 类 中的静态方法org.apache.spark.scheduler.StatsReportListener
 
showMillisDistribution(String, Option<org.apache.spark.util.Distribution>) - 类 中的静态方法org.apache.spark.scheduler.StatsReportListener
 
showMillisDistribution(String, Function2<TaskInfo, TaskMetrics, Object>, Seq<Tuple2<TaskInfo, TaskMetrics>>) - 类 中的静态方法org.apache.spark.scheduler.StatsReportListener
 
showMillisDistribution(String, Function1<BatchInfo, Option<Object>>) - 类 中的方法org.apache.spark.streaming.scheduler.StatsReportListener
 
shuffle(Column) - 类 中的静态方法org.apache.spark.sql.functions
Returns a random permutation of the given array.
SHUFFLE() - 类 中的静态方法org.apache.spark.storage.BlockId
 
SHUFFLE_BATCH() - 类 中的静态方法org.apache.spark.storage.BlockId
 
SHUFFLE_DATA() - 类 中的静态方法org.apache.spark.storage.BlockId
 
SHUFFLE_INDEX() - 类 中的静态方法org.apache.spark.storage.BlockId
 
SHUFFLE_LOCAL_BLOCKS() - 类 中的静态方法org.apache.spark.status.TaskIndexNames
 
SHUFFLE_READ() - 类 中的静态方法org.apache.spark.ui.ToolTips
 
SHUFFLE_READ_BLOCKED_TIME() - 类 中的静态方法org.apache.spark.ui.jobs.TaskDetailsClassNames
 
SHUFFLE_READ_BLOCKED_TIME() - 类 中的静态方法org.apache.spark.ui.ToolTips
 
SHUFFLE_READ_METRICS_PREFIX() - 类 中的静态方法org.apache.spark.InternalAccumulator
 
SHUFFLE_READ_RECORDS() - 类 中的静态方法org.apache.spark.status.TaskIndexNames
 
SHUFFLE_READ_REMOTE_SIZE() - 类 中的静态方法org.apache.spark.ui.jobs.TaskDetailsClassNames
 
SHUFFLE_READ_REMOTE_SIZE() - 类 中的静态方法org.apache.spark.ui.ToolTips
 
SHUFFLE_READ_TIME() - 类 中的静态方法org.apache.spark.status.TaskIndexNames
 
SHUFFLE_REMOTE_BLOCKS() - 类 中的静态方法org.apache.spark.status.TaskIndexNames
 
SHUFFLE_REMOTE_READS() - 类 中的静态方法org.apache.spark.status.TaskIndexNames
 
SHUFFLE_REMOTE_READS_TO_DISK() - 类 中的静态方法org.apache.spark.status.TaskIndexNames
 
SHUFFLE_SERVICE() - 类 中的静态方法org.apache.spark.metrics.MetricsSystemInstances
 
SHUFFLE_TOTAL_BLOCKS() - 类 中的静态方法org.apache.spark.status.TaskIndexNames
 
SHUFFLE_TOTAL_READS() - 类 中的静态方法org.apache.spark.status.TaskIndexNames
 
SHUFFLE_WRITE() - 类 中的静态方法org.apache.spark.ui.ToolTips
 
SHUFFLE_WRITE_METRICS_PREFIX() - 类 中的静态方法org.apache.spark.InternalAccumulator
 
SHUFFLE_WRITE_RECORDS() - 类 中的静态方法org.apache.spark.status.TaskIndexNames
 
SHUFFLE_WRITE_SIZE() - 类 中的静态方法org.apache.spark.status.TaskIndexNames
 
SHUFFLE_WRITE_TIME() - 类 中的静态方法org.apache.spark.status.TaskIndexNames
 
ShuffleBlockBatchId - org.apache.spark.storage中的类
 
ShuffleBlockBatchId(int, long, int, int) - 类 的构造器org.apache.spark.storage.ShuffleBlockBatchId
 
ShuffleBlockId - org.apache.spark.storage中的类
 
ShuffleBlockId(int, long, int) - 类 的构造器org.apache.spark.storage.ShuffleBlockId
 
shuffleCleaned(int) - 接口 中的方法org.apache.spark.CleanerListener
 
ShuffleDataBlockId - org.apache.spark.storage中的类
 
ShuffleDataBlockId(int, long, int) - 类 的构造器org.apache.spark.storage.ShuffleDataBlockId
 
ShuffleDataIO - org.apache.spark.shuffle.api中的接口
:: Private :: An interface for plugging in modules for storing and reading temporary shuffle data.
ShuffleDependency<K,V,C> - org.apache.spark中的类
:: DeveloperApi :: Represents a dependency on the output of a shuffle stage.
ShuffleDependency(RDD<? extends Product2<K, V>>, Partitioner, Serializer, Option<Ordering<K>>, Option<Aggregator<K, V, C>>, boolean, ShuffleWriteProcessor, ClassTag<K>, ClassTag<V>, ClassTag<C>) - 类 的构造器org.apache.spark.ShuffleDependency
 
ShuffledRDD<K,V,C> - org.apache.spark.rdd中的类
:: DeveloperApi :: The resulting RDD from a shuffle (e.g. repartitioning of data).
ShuffledRDD(RDD<? extends Product2<K, V>>, Partitioner, ClassTag<K>, ClassTag<V>, ClassTag<C>) - 类 的构造器org.apache.spark.rdd.ShuffledRDD
 
ShuffleDriverComponents - org.apache.spark.shuffle.api中的接口
:: Private :: An interface for building shuffle support modules for the Driver.
ShuffleExecutorComponents - org.apache.spark.shuffle.api中的接口
:: Private :: An interface for building shuffle support for Executors.
ShuffleFetchCompletionListener - org.apache.spark.storage中的类
A listener to be called at the completion of the ShuffleBlockFetcherIterator; its data parameter is the ShuffleBlockFetcherIterator to process.
ShuffleFetchCompletionListener(ShuffleBlockFetcherIterator) - 类 的构造器org.apache.spark.storage.ShuffleFetchCompletionListener
 
shuffleFetchWaitTime() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
shuffleHandle() - 类 中的方法org.apache.spark.ShuffleDependency
 
shuffleId() - 类 中的方法org.apache.spark.CleanShuffle
 
shuffleId() - 类 中的方法org.apache.spark.FetchFailed
 
shuffleId() - 类 中的方法org.apache.spark.ShuffleDependency
 
shuffleId() - 类 中的方法org.apache.spark.storage.BlockManagerMessages.RemoveShuffle
 
shuffleId() - 类 中的方法org.apache.spark.storage.ShuffleBlockBatchId
 
shuffleId() - 类 中的方法org.apache.spark.storage.ShuffleBlockId
 
shuffleId() - 类 中的方法org.apache.spark.storage.ShuffleDataBlockId
 
shuffleId() - 类 中的方法org.apache.spark.storage.ShuffleIndexBlockId
 
ShuffleIndexBlockId - org.apache.spark.storage中的类
 
ShuffleIndexBlockId(int, long, int) - 类 的构造器org.apache.spark.storage.ShuffleIndexBlockId
 
shuffleLocalBlocksFetched() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
shuffleLocalBytesRead() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
shuffleManager() - 类 中的方法org.apache.spark.SparkEnv
 
ShuffleMapOutputWriter - org.apache.spark.shuffle.api中的接口
:: Private :: A top-level writer that returns child writers for persisting the output of a map task, and then commits all of the writes as one atomic operation.
ShufflePartitionWriter - org.apache.spark.shuffle.api中的接口
:: Private :: An interface for opening streams to persist partition bytes to a backing data store.
shuffleRead() - 类 中的方法org.apache.spark.status.api.v1.ExecutorStageSummary
 
shuffleRead$() - 类 的构造器org.apache.spark.InternalAccumulator.shuffleRead$
 
shuffleReadBytes() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
ShuffleReadMetricDistributions - org.apache.spark.status.api.v1中的类
 
ShuffleReadMetrics - org.apache.spark.status.api.v1中的类
 
shuffleReadMetrics() - 类 中的方法org.apache.spark.status.api.v1.TaskMetricDistributions
 
shuffleReadMetrics() - 类 中的方法org.apache.spark.status.api.v1.TaskMetrics
 
shuffleReadRecords() - 类 中的方法org.apache.spark.status.api.v1.ExecutorStageSummary
 
shuffleReadRecords() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
shuffleRemoteBlocksFetched() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
shuffleRemoteBytesRead() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
shuffleRemoteBytesReadToDisk() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
ShuffleStatus - org.apache.spark中的类
Helper class used by the MapOutputTrackerMaster to perform bookkeeping for a single ShuffleMapStage.
ShuffleStatus(int) - 类 的构造器org.apache.spark.ShuffleStatus
 
shuffleWrite() - 类 中的方法org.apache.spark.status.api.v1.ExecutorStageSummary
 
shuffleWrite$() - 类 的构造器org.apache.spark.InternalAccumulator.shuffleWrite$
 
shuffleWriteBytes() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
ShuffleWriteMetricDistributions - org.apache.spark.status.api.v1中的类
 
ShuffleWriteMetrics - org.apache.spark.status.api.v1中的类
 
shuffleWriteMetrics() - 类 中的方法org.apache.spark.status.api.v1.TaskMetricDistributions
 
shuffleWriteMetrics() - 类 中的方法org.apache.spark.status.api.v1.TaskMetrics
 
shuffleWriteRecords() - 类 中的方法org.apache.spark.status.api.v1.ExecutorStageSummary
 
shuffleWriteRecords() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
shuffleWriterProcessor() - 类 中的方法org.apache.spark.ShuffleDependency
 
shuffleWriteTime() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
shutdown() - 接口 中的方法org.apache.spark.ExecutorPlugin
Clean up and terminate this plugin.
shutdown(ExecutorService, Duration) - 类 中的静态方法org.apache.spark.util.ThreadUtils
 
Shutdown$() - 类 的构造器org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.Shutdown$
 
ShutdownHookManager - org.apache.spark.util中的类
Various utility methods used by Spark.
ShutdownHookManager() - 类 的构造器org.apache.spark.util.ShutdownHookManager
 
sigma() - 类 中的方法org.apache.spark.mllib.stat.distribution.MultivariateGaussian
 
sigmas() - 类 中的方法org.apache.spark.mllib.clustering.ExpectationSum
 
SignalUtils - org.apache.spark.util中的类
Contains utilities for working with posix signals.
SignalUtils() - 类 的构造器org.apache.spark.util.SignalUtils
 
signum(Column) - 类 中的静态方法org.apache.spark.sql.functions
Computes the signum of the given value.
signum(String) - 类 中的静态方法org.apache.spark.sql.functions
Computes the signum of the given column.
signum(T) - 类 中的静态方法org.apache.spark.sql.types.ByteExactNumeric
 
signum(T) - 类 中的静态方法org.apache.spark.sql.types.DecimalExactNumeric
 
signum(T) - 类 中的静态方法org.apache.spark.sql.types.DoubleExactNumeric
 
signum(T) - 类 中的静态方法org.apache.spark.sql.types.FloatExactNumeric
 
signum(T) - 类 中的静态方法org.apache.spark.sql.types.IntegerExactNumeric
 
signum(T) - 类 中的静态方法org.apache.spark.sql.types.LongExactNumeric
 
signum(T) - 类 中的静态方法org.apache.spark.sql.types.ShortExactNumeric
 
SimpleFutureAction<T> - org.apache.spark中的类
A FutureAction holding the result of an action that triggers a single job.
simpleString() - 类 中的方法org.apache.spark.sql.types.ArrayType
 
simpleString() - 类 中的静态方法org.apache.spark.sql.types.BinaryType
 
simpleString() - 类 中的静态方法org.apache.spark.sql.types.BooleanType
 
simpleString() - 类 中的方法org.apache.spark.sql.types.ByteType
 
simpleString() - 类 中的方法org.apache.spark.sql.types.CalendarIntervalType
 
simpleString() - 类 中的方法org.apache.spark.sql.types.CharType
 
simpleString() - 类 中的方法org.apache.spark.sql.types.DataType
Readable string representation for the type.
simpleString() - 类 中的静态方法org.apache.spark.sql.types.DateType
 
simpleString() - 类 中的方法org.apache.spark.sql.types.DecimalType
 
simpleString() - 类 中的静态方法org.apache.spark.sql.types.DoubleType
 
simpleString() - 类 中的静态方法org.apache.spark.sql.types.FloatType
 
simpleString() - 类 中的方法org.apache.spark.sql.types.IntegerType
 
simpleString() - 类 中的方法org.apache.spark.sql.types.LongType
 
simpleString() - 类 中的方法org.apache.spark.sql.types.MapType
 
simpleString() - 类 中的静态方法org.apache.spark.sql.types.NullType
 
simpleString() - 类 中的方法org.apache.spark.sql.types.ObjectType
 
simpleString() - 类 中的方法org.apache.spark.sql.types.ShortType
 
simpleString() - 类 中的静态方法org.apache.spark.sql.types.StringType
 
simpleString() - 类 中的方法org.apache.spark.sql.types.StructType
 
simpleString() - 类 中的静态方法org.apache.spark.sql.types.TimestampType
 
simpleString() - 类 中的方法org.apache.spark.sql.types.VarcharType
 
SimpleUpdater - org.apache.spark.mllib.optimization中的类
:: DeveloperApi :: A simple updater for gradient descent *without* any regularization.
SimpleUpdater() - 类 的构造器org.apache.spark.mllib.optimization.SimpleUpdater
 
sin(Column) - 类 中的静态方法org.apache.spark.sql.functions
 
sin(String) - 类 中的静态方法org.apache.spark.sql.functions
 
SingleSpillShuffleMapOutputWriter - org.apache.spark.shuffle.api中的接口
Optional extension for partition writing that is optimized for transferring a single file to the backing store.
SingleValueExecutorMetricType - org.apache.spark.metrics中的接口
 
SingularValueDecomposition<UType,VType> - org.apache.spark.mllib.linalg中的类
Represents singular value decomposition (SVD) factors.
SingularValueDecomposition(UType, Vector, VType) - 类 的构造器org.apache.spark.mllib.linalg.SingularValueDecomposition
 
sinh(Column) - 类 中的静态方法org.apache.spark.sql.functions
 
sinh(String) - 类 中的静态方法org.apache.spark.sql.functions
 
Sink - org.apache.spark.metrics.sink中的接口
 
sink() - 类 中的方法org.apache.spark.sql.streaming.StreamingQueryProgress
 
SinkProgress - org.apache.spark.sql.streaming中的类
Information about progress made for a sink in the execution of a StreamingQuery during a trigger.
size() - 类 中的方法org.apache.spark.api.java.JavaUtils.SerializableMapWrapper
 
size() - 类 中的方法org.apache.spark.ml.attribute.AttributeGroup
Size of the attribute group.
size() - 类 中的方法org.apache.spark.ml.feature.VectorSizeHint
The size of Vectors in inputCol.
size() - 类 中的方法org.apache.spark.ml.linalg.DenseVector
 
size() - 类 中的方法org.apache.spark.ml.linalg.SparseVector
 
size() - 接口 中的方法org.apache.spark.ml.linalg.Vector
Size of the vector.
size() - 类 中的方法org.apache.spark.ml.param.ParamMap
Number of param pairs in this map.
size() - 类 中的方法org.apache.spark.mllib.linalg.DenseVector
 
size() - 类 中的方法org.apache.spark.mllib.linalg.SparseVector
 
size() - 接口 中的方法org.apache.spark.mllib.linalg.Vector
Size of the vector.
size(Column) - 类 中的静态方法org.apache.spark.sql.functions
Returns the length of the array or map.
size() - 接口 中的方法org.apache.spark.sql.Row
Number of elements in the Row.
size() - 类 中的方法org.apache.spark.sql.util.CaseInsensitiveStringMap
 
size() - 接口 中的方法org.apache.spark.storage.BlockData
 
size() - 类 中的方法org.apache.spark.storage.DiskBlockData
 
size() - 类 中的方法org.apache.spark.storage.memory.DeserializedMemoryEntry
 
size() - 接口 中的方法org.apache.spark.storage.memory.MemoryEntry
 
size() - 类 中的方法org.apache.spark.storage.memory.SerializedMemoryEntry
 
SIZE_IN_MEMORY() - 类 中的静态方法org.apache.spark.ui.storage.ToolTips
 
SIZE_ON_DISK() - 类 中的静态方法org.apache.spark.ui.storage.ToolTips
 
SizeEstimator - org.apache.spark.util中的类
:: DeveloperApi :: Estimates the sizes of Java objects (number of bytes of memory they occupy), for use in memory-aware caches.
SizeEstimator() - 类 的构造器org.apache.spark.util.SizeEstimator
 
sizeInBytes() - 接口 中的方法org.apache.spark.sql.connector.read.Statistics
 
sizeInBytes() - 类 中的方法org.apache.spark.sql.sources.BaseRelation
Returns an estimated size of this relation in bytes.
sketch(RDD<K>, int, ClassTag<K>) - 类 中的静态方法org.apache.spark.RangePartitioner
Sketches the input RDD via reservoir sampling on each partition.
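The per-partition reservoir sampling that sketch performs can be illustrated in plain Python: keep a uniform random sample of k items from a stream of unknown length in one pass.

```python
import random

# Reservoir sampling, the single-pass technique RangePartitioner.sketch
# applies to each partition: after seeing i items, each item has been kept
# with probability k / i.
def reservoir_sample(stream, k, rng=None):
    rng = rng or random.Random(42)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)
        else:
            j = rng.randint(0, i)  # uniform over all items seen so far
            if j < k:
                sample[j] = item
    return sample

print(reservoir_sample(range(1000), 5))
```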
skewness(Column) - 类 中的静态方法org.apache.spark.sql.functions
Aggregate function: returns the skewness of the values in a group.
skewness(String) - 类 中的静态方法org.apache.spark.sql.functions
Aggregate function: returns the skewness of the values in a group.
skip(long) - 类 中的方法org.apache.spark.io.NioBufferedFileInputStream
 
skip(long) - 类 中的方法org.apache.spark.io.ReadAheadInputStream
 
skip(long) - 类 中的方法org.apache.spark.storage.BufferReleasingInputStream
 
skippedStages() - 类 中的方法org.apache.spark.status.LiveJob
 
skippedTasks() - 类 中的方法org.apache.spark.status.LiveJob
 
skipWhitespace() - 类 中的静态方法org.apache.spark.ml.feature.RFormulaParser
 
slice(Column, int, int) - 类 中的静态方法org.apache.spark.sql.functions
Returns an array containing all the elements in x from index start (or starting from the end if start is negative) with the specified length.
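The indexing convention of functions.slice (1-based start, negative start counting from the end) can be sketched in plain Python; this mirrors the documented semantics, not Spark's implementation.

```python
# Sketch of functions.slice semantics: start is 1-based, a negative start
# counts from the end, and the result has at most `length` elements.
def slice_array(xs, start, length):
    if start == 0:
        raise ValueError("start index must not be 0")
    begin = start - 1 if start > 0 else len(xs) + start
    return xs[begin:begin + length]

print(slice_array([1, 2, 3, 4, 5], 2, 3))   # [2, 3, 4]
print(slice_array([1, 2, 3, 4, 5], -2, 2))  # [4, 5]
```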
slice(Time, Time) - 接口 中的方法org.apache.spark.streaming.api.java.JavaDStreamLike
Return all the RDDs between 'fromDuration' and 'toDuration' (both included)
slice(org.apache.spark.streaming.Interval) - 类 中的方法org.apache.spark.streaming.dstream.DStream
Return all the RDDs defined by the Interval object (both end times included)
slice(Time, Time) - 类 中的方法org.apache.spark.streaming.dstream.DStream
Return all the RDDs between 'fromTime' and 'toTime' (both included)
slideDuration() - 类 中的方法org.apache.spark.streaming.dstream.DStream
Time interval after which the DStream generates an RDD.
slideDuration() - 类 中的方法org.apache.spark.streaming.dstream.InputDStream
 
sliding(int, int) - 类 中的方法org.apache.spark.mllib.rdd.RDDFunctions
Returns an RDD from grouping items of its parent RDD in fixed size blocks by passing a sliding window over them.
sliding(int) - 类 中的方法org.apache.spark.mllib.rdd.RDDFunctions
sliding(Int, Int) with step = 1.
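The fixed-size sliding window these methods describe can be sketched in plain Python (trailing partial windows are dropped, matching the described grouping into fixed-size blocks):

```python
# Sketch of RDDFunctions.sliding(window, step): fixed-size windows over a
# sequence, advancing by `step`, dropping any trailing partial window.
def sliding(xs, window, step=1):
    return [xs[i:i + window]
            for i in range(0, len(xs) - window + 1, step)]

print(sliding([1, 2, 3, 4, 5], 3))          # [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
print(sliding([1, 2, 3, 4, 5], 2, step=2))  # [[1, 2], [3, 4]]
```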
smoothing() - 类 中的方法org.apache.spark.ml.classification.NaiveBayes
 
smoothing() - 类 中的方法org.apache.spark.ml.classification.NaiveBayesModel
 
smoothing() - 接口 中的方法org.apache.spark.ml.classification.NaiveBayesParams
The smoothing parameter.
SnappyCompressionCodec - org.apache.spark.io中的类
:: DeveloperApi :: Snappy implementation of CompressionCodec.
SnappyCompressionCodec(SparkConf) - 类 的构造器org.apache.spark.io.SnappyCompressionCodec
 
socketStream(String, int, Function<InputStream, Iterable<T>>, StorageLevel) - 类 中的方法org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream from network source hostname:port.
socketStream(String, int, Function1<InputStream, Iterator<T>>, StorageLevel, ClassTag<T>) - 类 中的方法org.apache.spark.streaming.StreamingContext
Creates an input stream from TCP source hostname:port.
socketTextStream(String, int, StorageLevel) - 类 中的方法org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream from network source hostname:port.
socketTextStream(String, int) - 类 中的方法org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream from network source hostname:port.
socketTextStream(String, int, StorageLevel) - 类 中的方法org.apache.spark.streaming.StreamingContext
Creates an input stream from TCP source hostname:port.
solve(double, double, DenseVector, DenseVector, DenseVector) - 接口 中的方法org.apache.spark.ml.optim.NormalEquationSolver
Solve the normal equations from summary statistics.
solve(ALS.NormalEquation, double) - 接口 中的方法org.apache.spark.ml.recommendation.ALS.LeastSquaresNESolver
Solves a least squares problem with regularization (possibly with other constraints).
solve(double[], double[]) - 类 中的静态方法org.apache.spark.mllib.linalg.CholeskyDecomposition
Solves a symmetric positive definite linear system via Cholesky factorization.
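The Cholesky-based solve can be illustrated on a 2x2 symmetric positive definite system: factor A = L L^T, then forward-substitute L y = b and back-substitute L^T x = y. A minimal sketch, hard-coded to 2x2 for clarity:

```python
import math

# Illustrative Cholesky solve for a 2x2 SPD system A x = b:
# A = L L^T, solve L y = b (forward), then L^T x = y (backward).
def cholesky_solve_2x2(a11, a12, a22, b1, b2):
    l11 = math.sqrt(a11)
    l21 = a12 / l11
    l22 = math.sqrt(a22 - l21 * l21)
    # forward substitution: L y = b
    y1 = b1 / l11
    y2 = (b2 - l21 * y1) / l22
    # backward substitution: L^T x = y
    x2 = y2 / l22
    x1 = (y1 - l21 * x2) / l11
    return x1, x2

# A = [[4, 2], [2, 3]], b = [10, 8]  ->  x = (1.75, 1.5)
print(cholesky_solve_2x2(4.0, 2.0, 3.0, 10.0, 8.0))
```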
solve(double[], double[], NNLS.Workspace) - 类 中的静态方法org.apache.spark.mllib.optimization.NNLS
Solve a least squares problem, possibly with nonnegativity constraints, by a modified projected gradient method.
solver() - 类 中的方法org.apache.spark.ml.classification.MultilayerPerceptronClassifier
 
solver() - 接口 中的方法org.apache.spark.ml.classification.MultilayerPerceptronParams
The solver algorithm for optimization.
solver() - 接口 中的方法org.apache.spark.ml.param.shared.HasSolver
Param for the solver algorithm for optimization.
solver() - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegression
 
solver() - 接口 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
The solver algorithm for optimization.
solver() - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
 
solver() - 类 中的方法org.apache.spark.ml.regression.GeneralizedLinearRegressionTrainingSummary
 
solver() - 类 中的方法org.apache.spark.ml.regression.LinearRegression
 
solver() - 类 中的方法org.apache.spark.ml.regression.LinearRegressionModel
 
solver() - 接口 中的方法org.apache.spark.ml.regression.LinearRegressionParams
The solver algorithm for optimization.
Sort() - 类 中的静态方法org.apache.spark.mllib.tree.configuration.QuantileStrategy
 
sort(String, String...) - 类 中的方法org.apache.spark.sql.Dataset
Returns a new Dataset sorted by the specified column, all in ascending order.
sort(Column...) - 类 中的方法org.apache.spark.sql.Dataset
Returns a new Dataset sorted by the given expressions.
sort(String, Seq<String>) - 类 中的方法org.apache.spark.sql.Dataset
Returns a new Dataset sorted by the specified column, all in ascending order.
sort(Seq<Column>) - 类 中的方法org.apache.spark.sql.Dataset
Returns a new Dataset sorted by the given expressions.
sort_array(Column) - 类 中的静态方法org.apache.spark.sql.functions
Sorts the input array for the given column in ascending order, according to the natural ordering of the array elements.
sort_array(Column, boolean) - 类 中的静态方法org.apache.spark.sql.functions
Sorts the input array for the given column in ascending or descending order, according to the natural ordering of the array elements.
sortBy(Function<T, S>, boolean, int) - 类 中的方法org.apache.spark.api.java.JavaRDD
Return this RDD sorted by the given key function.
sortBy(Function1<T, K>, boolean, int, Ordering<K>, ClassTag<K>) - 类 中的方法org.apache.spark.rdd.RDD
Return this RDD sorted by the given key function.
sortBy(String, String...) - 类 中的方法org.apache.spark.sql.DataFrameWriter
Sorts the output in each bucket by the given columns.
sortBy(String, Seq<String>) - 类 中的方法org.apache.spark.sql.DataFrameWriter
Sorts the output in each bucket by the given columns.
sortByKey() - 类 中的方法org.apache.spark.api.java.JavaPairRDD
Sort the RDD by key, so that each partition contains a sorted range of the elements in ascending order.
sortByKey(boolean) - 类 中的方法org.apache.spark.api.java.JavaPairRDD
Sort the RDD by key, so that each partition contains a sorted range of the elements.
sortByKey(boolean, int) - 类 中的方法org.apache.spark.api.java.JavaPairRDD
Sort the RDD by key, so that each partition contains a sorted range of the elements.
sortByKey(Comparator<K>) - 类 中的方法org.apache.spark.api.java.JavaPairRDD
Sort the RDD by key, so that each partition contains a sorted range of the elements.
sortByKey(Comparator<K>, boolean) - 类 中的方法org.apache.spark.api.java.JavaPairRDD
Sort the RDD by key, so that each partition contains a sorted range of the elements.
sortByKey(Comparator<K>, boolean, int) - 类 中的方法org.apache.spark.api.java.JavaPairRDD
Sort the RDD by key, so that each partition contains a sorted range of the elements.
sortByKey(boolean, int) - 类 中的方法org.apache.spark.rdd.OrderedRDDFunctions
Sort the RDD by key, so that each partition contains a sorted range of the elements.
sortWithinPartitions(String, String...) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset with each partition sorted by the given expressions.
sortWithinPartitions(Column...) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset with each partition sorted by the given expressions.
sortWithinPartitions(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset with each partition sorted by the given expressions.
sortWithinPartitions(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset with each partition sorted by the given expressions.
soundex(Column) - Static method in class org.apache.spark.sql.functions
Returns the soundex code for the specified expression.
Source - Interface in org.apache.spark.metrics.source
 
sourceName() - Static method in class org.apache.spark.metrics.source.CodegenMetrics
 
sourceName() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
 
sourceName() - Method in interface org.apache.spark.metrics.source.Source
 
SourceProgress - Class in org.apache.spark.sql.streaming
Information about progress made for a source in the execution of a StreamingQuery during a trigger.
sources() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
 
sourceSchema(SQLContext, Option<StructType>, String, Map<String, String>) - Method in interface org.apache.spark.sql.sources.StreamSourceProvider
Returns the name and schema of the source that can be used to continually read data.
spark() - Method in class org.apache.spark.status.api.v1.VersionInfo
 
SPARK_CONNECTOR_NAME() - Static method in class org.apache.spark.ui.JettyUtils
 
SPARK_CONTEXT_SHUTDOWN_PRIORITY() - Static method in class org.apache.spark.util.ShutdownHookManager
The shutdown priority of the SparkContext instance.
SPARK_IO_ENCRYPTION_COMMONS_CONFIG_PREFIX() - Static method in class org.apache.spark.security.CryptoStreamUtils
 
SPARK_MASTER - Static variable in class org.apache.spark.launcher.SparkLauncher
The Spark master.
spark_partition_id() - Static method in class org.apache.spark.sql.functions
Partition ID.
SPARK_REGEX() - Static method in class org.apache.spark.SparkMasterRegex
 
SPARK_WORKER_PREFIX() - Static method in class org.apache.spark.internal.config.Worker
 
SPARK_WORKER_RESOURCE_FILE() - Static method in class org.apache.spark.internal.config.Worker
 
SparkAppConfig(Seq<Tuple2<String, String>>, Option<byte[]>, Option<byte[]>) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig
 
SparkAppConfig$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig$
 
SparkAppHandle - Interface in org.apache.spark.launcher
A handle to a running Spark application.
SparkAppHandle.Listener - Interface in org.apache.spark.launcher
Listener for updates to a handle's state.
SparkAppHandle.State - Enum in org.apache.spark.launcher
Represents the application's state.
SparkAWSCredentials - Interface in org.apache.spark.streaming.kinesis
Serializable interface providing a method executors can call to obtain an AWSCredentialsProvider instance for authenticating to AWS services.
SparkAWSCredentials.Builder - Class in org.apache.spark.streaming.kinesis
Builder for SparkAWSCredentials instances.
sparkConf - Variable in class org.apache.spark.ExecutorPluginContext
 
SparkConf - Class in org.apache.spark
Configuration for a Spark application.
SparkConf(boolean) - Constructor for class org.apache.spark.SparkConf
 
SparkConf() - Constructor for class org.apache.spark.SparkConf
Create a SparkConf that loads defaults from system properties and the classpath.
sparkContext() - Method in class org.apache.spark.rdd.RDD
The SparkContext that created this RDD.
SparkContext - Class in org.apache.spark
Main entry point for Spark functionality.
SparkContext(SparkConf) - Constructor for class org.apache.spark.SparkContext
 
SparkContext() - Constructor for class org.apache.spark.SparkContext
Create a SparkContext that loads settings from system properties (for instance, when launching with ./bin/spark-submit).
SparkContext(String, String, SparkConf) - Constructor for class org.apache.spark.SparkContext
Alternative constructor that allows setting common Spark properties directly
SparkContext(String, String, String, Seq<String>, Map<String, String>) - Constructor for class org.apache.spark.SparkContext
Alternative constructor that allows setting common Spark properties directly
sparkContext() - Method in class org.apache.spark.sql.SparkSession
 
sparkContext() - Method in class org.apache.spark.sql.SQLContext
 
sparkContext() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
The underlying SparkContext
sparkContext() - Method in class org.apache.spark.streaming.StreamingContext
Return the associated Spark context
SparkDataStream - Interface in org.apache.spark.sql.connector.read.streaming
The base interface representing a readable data stream in a Spark streaming query.
SparkEnv - Class in org.apache.spark
:: DeveloperApi :: Holds all the runtime environment objects for a running Spark instance (either master or worker), including the serializer, RpcEnv, block manager, map output tracker, etc.
SparkEnv(String, org.apache.spark.rpc.RpcEnv, Serializer, Serializer, org.apache.spark.serializer.SerializerManager, MapOutputTracker, ShuffleManager, org.apache.spark.broadcast.BroadcastManager, org.apache.spark.storage.BlockManager, SecurityManager, org.apache.spark.metrics.MetricsSystem, MemoryManager, org.apache.spark.scheduler.OutputCommitCoordinator, SparkConf) - Constructor for class org.apache.spark.SparkEnv
 
sparkEventFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
 
sparkEventToJson(SparkListenerEvent) - Static method in class org.apache.spark.util.JsonProtocol
JSON serialization methods for SparkListenerEvents.
SparkException - Exception in org.apache.spark
 
SparkException(String, Throwable) - Constructor for exception org.apache.spark.SparkException
 
SparkException(String) - Constructor for exception org.apache.spark.SparkException
 
SparkExecutorInfo - Interface in org.apache.spark
Exposes information about Spark Executors.
SparkExecutorInfoImpl - Class in org.apache.spark
 
SparkExecutorInfoImpl(String, int, long, int, long, long, long, long) - Constructor for class org.apache.spark.SparkExecutorInfoImpl
 
SparkExitCode - Class in org.apache.spark.util
 
SparkExitCode() - Constructor for class org.apache.spark.util.SparkExitCode
 
SparkFiles - Class in org.apache.spark
Resolves paths to files added through SparkContext.addFile().
SparkFiles() - Constructor for class org.apache.spark.SparkFiles
 
SparkFirehoseListener - Class in org.apache.spark
Class that allows users to receive all SparkListener events.
SparkFirehoseListener() - Constructor for class org.apache.spark.SparkFirehoseListener
 
SparkHadoopMapRedUtil - Class in org.apache.spark.mapred
 
SparkHadoopMapRedUtil() - Constructor for class org.apache.spark.mapred.SparkHadoopMapRedUtil
 
SparkHadoopWriter - Class in org.apache.spark.internal.io
A helper object that saves an RDD using a Hadoop OutputFormat.
SparkHadoopWriter() - Constructor for class org.apache.spark.internal.io.SparkHadoopWriter
 
SparkHadoopWriterUtils - Class in org.apache.spark.internal.io
A helper object that provides common utilities used when saving an RDD with a Hadoop OutputFormat (from both the old mapred API and the new mapreduce API).
SparkHadoopWriterUtils() - Constructor for class org.apache.spark.internal.io.SparkHadoopWriterUtils
 
sparkJavaOpts(SparkConf, Function1<String, Object>) - Static method in class org.apache.spark.util.Utils
Convert all Spark properties set in the given SparkConf to a sequence of Java options.
SparkJobInfo - Interface in org.apache.spark
Exposes information about Spark Jobs.
SparkJobInfoImpl - Class in org.apache.spark
 
SparkJobInfoImpl(int, int[], JobExecutionStatus) - Constructor for class org.apache.spark.SparkJobInfoImpl
 
SparkLauncher - Class in org.apache.spark.launcher
Launcher for Spark applications.
SparkLauncher() - Constructor for class org.apache.spark.launcher.SparkLauncher
 
SparkLauncher(Map<String, String>) - Constructor for class org.apache.spark.launcher.SparkLauncher
Creates a launcher that will set the given environment variables in the child.
SparkListener - Class in org.apache.spark.scheduler
:: DeveloperApi :: A default implementation for SparkListenerInterface that has no-op implementations for all callbacks.
SparkListener() - Constructor for class org.apache.spark.scheduler.SparkListener
 
SparkListenerApplicationEnd - Class in org.apache.spark.scheduler
 
SparkListenerApplicationEnd(long) - Constructor for class org.apache.spark.scheduler.SparkListenerApplicationEnd
 
SparkListenerApplicationStart - Class in org.apache.spark.scheduler
 
SparkListenerApplicationStart(String, Option<String>, long, String, Option<String>, Option<Map<String, String>>, Option<Map<String, String>>) - Constructor for class org.apache.spark.scheduler.SparkListenerApplicationStart
 
SparkListenerBlockManagerAdded - Class in org.apache.spark.scheduler
 
SparkListenerBlockManagerAdded(long, BlockManagerId, long, Option<Object>, Option<Object>) - Constructor for class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
 
SparkListenerBlockManagerRemoved - Class in org.apache.spark.scheduler
 
SparkListenerBlockManagerRemoved(long, BlockManagerId) - Constructor for class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
 
SparkListenerBlockUpdated - Class in org.apache.spark.scheduler
 
SparkListenerBlockUpdated(BlockUpdatedInfo) - Constructor for class org.apache.spark.scheduler.SparkListenerBlockUpdated
 
SparkListenerBus - Interface in org.apache.spark.scheduler
A SparkListenerEvent bus that relays SparkListenerEvents to its listeners
SparkListenerEnvironmentUpdate - Class in org.apache.spark.scheduler
 
SparkListenerEnvironmentUpdate(Map<String, Seq<Tuple2<String, String>>>) - Constructor for class org.apache.spark.scheduler.SparkListenerEnvironmentUpdate
 
SparkListenerEvent - Interface in org.apache.spark.scheduler
 
SparkListenerExecutorAdded - Class in org.apache.spark.scheduler
 
SparkListenerExecutorAdded(long, String, ExecutorInfo) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorAdded
 
SparkListenerExecutorBlacklisted - Class in org.apache.spark.scheduler
 
SparkListenerExecutorBlacklisted(long, String, int) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
 
SparkListenerExecutorBlacklistedForStage - Class in org.apache.spark.scheduler
 
SparkListenerExecutorBlacklistedForStage(long, String, int, int, int) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorBlacklistedForStage
 
SparkListenerExecutorMetricsUpdate - Class in org.apache.spark.scheduler
Periodic updates from executors.
SparkListenerExecutorMetricsUpdate(String, Seq<Tuple4<Object, Object, Object, Seq<AccumulableInfo>>>, Map<Tuple2<Object, Object>, ExecutorMetrics>) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
 
SparkListenerExecutorRemoved - Class in org.apache.spark.scheduler
 
SparkListenerExecutorRemoved(long, String, String) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorRemoved
 
SparkListenerExecutorUnblacklisted - Class in org.apache.spark.scheduler
 
SparkListenerExecutorUnblacklisted(long, String) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted
 
SparkListenerInterface - Interface in org.apache.spark.scheduler
Interface for listening to events from the Spark scheduler.
SparkListenerJobEnd - Class in org.apache.spark.scheduler
 
SparkListenerJobEnd(int, long, JobResult) - Constructor for class org.apache.spark.scheduler.SparkListenerJobEnd
 
SparkListenerJobStart - Class in org.apache.spark.scheduler
 
SparkListenerJobStart(int, long, Seq<StageInfo>, Properties) - Constructor for class org.apache.spark.scheduler.SparkListenerJobStart
 
SparkListenerLogStart - Class in org.apache.spark.scheduler
An internal class that describes the metadata of an event log.
SparkListenerLogStart(String) - Constructor for class org.apache.spark.scheduler.SparkListenerLogStart
 
SparkListenerNodeBlacklisted - Class in org.apache.spark.scheduler
 
SparkListenerNodeBlacklisted(long, String, int) - Constructor for class org.apache.spark.scheduler.SparkListenerNodeBlacklisted
 
SparkListenerNodeBlacklistedForStage - Class in org.apache.spark.scheduler
 
SparkListenerNodeBlacklistedForStage(long, String, int, int, int) - Constructor for class org.apache.spark.scheduler.SparkListenerNodeBlacklistedForStage
 
SparkListenerNodeUnblacklisted - Class in org.apache.spark.scheduler
 
SparkListenerNodeUnblacklisted(long, String) - Constructor for class org.apache.spark.scheduler.SparkListenerNodeUnblacklisted
 
SparkListenerSpeculativeTaskSubmitted - Class in org.apache.spark.scheduler
 
SparkListenerSpeculativeTaskSubmitted(int, int) - Constructor for class org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted
 
SparkListenerStageCompleted - Class in org.apache.spark.scheduler
 
SparkListenerStageCompleted(StageInfo) - Constructor for class org.apache.spark.scheduler.SparkListenerStageCompleted
 
SparkListenerStageExecutorMetrics - Class in org.apache.spark.scheduler
Peak metric values for the executor for the stage, written to the history log at stage completion.
SparkListenerStageExecutorMetrics(String, int, int, ExecutorMetrics) - Constructor for class org.apache.spark.scheduler.SparkListenerStageExecutorMetrics
 
SparkListenerStageSubmitted - Class in org.apache.spark.scheduler
 
SparkListenerStageSubmitted(StageInfo, Properties) - Constructor for class org.apache.spark.scheduler.SparkListenerStageSubmitted
 
SparkListenerTaskEnd - Class in org.apache.spark.scheduler
 
SparkListenerTaskEnd(int, int, String, TaskEndReason, TaskInfo, ExecutorMetrics, TaskMetrics) - Constructor for class org.apache.spark.scheduler.SparkListenerTaskEnd
 
SparkListenerTaskGettingResult - Class in org.apache.spark.scheduler
 
SparkListenerTaskGettingResult(TaskInfo) - Constructor for class org.apache.spark.scheduler.SparkListenerTaskGettingResult
 
SparkListenerTaskStart - Class in org.apache.spark.scheduler
 
SparkListenerTaskStart(int, int, TaskInfo) - Constructor for class org.apache.spark.scheduler.SparkListenerTaskStart
 
SparkListenerUnpersistRDD - Class in org.apache.spark.scheduler
 
SparkListenerUnpersistRDD(int) - Constructor for class org.apache.spark.scheduler.SparkListenerUnpersistRDD
 
SparkMasterRegex - Class in org.apache.spark
A collection of regexes for extracting information from the master string.
SparkMasterRegex() - Constructor for class org.apache.spark.SparkMasterRegex
 
sparkProperties() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig
 
sparkProperties() - Method in class org.apache.spark.status.api.v1.ApplicationEnvironmentInfo
 
SPARKR_COMMAND() - Static method in class org.apache.spark.internal.config.R
 
sparkRPackagePath(boolean) - Static method in class org.apache.spark.api.r.RUtils
Get the list of paths for R packages in various deployment modes, of which the first path is for the SparkR package itself.
sparkSession() - Method in interface org.apache.spark.ml.util.BaseReadWrite
Returns the user-specified Spark Session or the default.
sparkSession() - Method in class org.apache.spark.sql.Dataset
 
sparkSession() - Method in class org.apache.spark.sql.dynamicpruning.PlanDynamicPruningFilters
 
sparkSession() - Method in interface org.apache.spark.sql.hive.HiveStrategies
 
SparkSession - Class in org.apache.spark.sql
The entry point to programming Spark with the Dataset and DataFrame API.
sparkSession() - Method in class org.apache.spark.sql.SQLContext
 
sparkSession() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
Returns the SparkSession associated with this.
SparkSession.Builder - Class in org.apache.spark.sql
Builder for SparkSession.
SparkSession.implicits$ - Class in org.apache.spark.sql
(Scala-specific) Implicit methods available in Scala for converting common Scala objects into DataFrames.
SparkSessionExtensions - Class in org.apache.spark.sql
:: Experimental :: Holder for injection points to the SparkSession.
SparkSessionExtensions() - Constructor for class org.apache.spark.sql.SparkSessionExtensions
 
SparkShellLoggingFilter - Class in org.apache.spark.internal
 
SparkShellLoggingFilter() - Constructor for class org.apache.spark.internal.SparkShellLoggingFilter
 
SparkShutdownHook - Class in org.apache.spark.util
 
SparkShutdownHook(int, Function0<BoxedUnit>) - Constructor for class org.apache.spark.util.SparkShutdownHook
 
SparkStageInfo - Interface in org.apache.spark
Exposes information about Spark Stages.
SparkStageInfoImpl - Class in org.apache.spark
 
SparkStageInfoImpl(int, int, long, String, int, int, int, int) - Constructor for class org.apache.spark.SparkStageInfoImpl
 
SparkStatusTracker - Class in org.apache.spark
Low-level status reporting APIs for monitoring job and stage progress.
sparkUser() - Method in class org.apache.spark.api.java.JavaSparkContext
 
sparkUser() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
 
sparkUser() - Method in class org.apache.spark.SparkContext
 
sparkUser() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
 
sparkVersion() - Method in class org.apache.spark.scheduler.SparkListenerLogStart
 
sparse(int, int, int[], int[], double[]) - Static method in class org.apache.spark.ml.linalg.Matrices
Creates a column-major sparse matrix in Compressed Sparse Column (CSC) format.
sparse(int, int[], double[]) - Static method in class org.apache.spark.ml.linalg.Vectors
Creates a sparse vector providing its index array and value array.
sparse(int, Seq<Tuple2<Object, Object>>) - Static method in class org.apache.spark.ml.linalg.Vectors
Creates a sparse vector using unordered (index, value) pairs.
sparse(int, Iterable<Tuple2<Integer, Double>>) - Static method in class org.apache.spark.ml.linalg.Vectors
Creates a sparse vector using unordered (index, value) pairs in a Java friendly way.
sparse(int, int, int[], int[], double[]) - Static method in class org.apache.spark.mllib.linalg.Matrices
Creates a column-major sparse matrix in Compressed Sparse Column (CSC) format.
sparse(int, int[], double[]) - Static method in class org.apache.spark.mllib.linalg.Vectors
Creates a sparse vector providing its index array and value array.
sparse(int, Seq<Tuple2<Object, Object>>) - Static method in class org.apache.spark.mllib.linalg.Vectors
Creates a sparse vector using unordered (index, value) pairs.
sparse(int, Iterable<Tuple2<Integer, Double>>) - Static method in class org.apache.spark.mllib.linalg.Vectors
Creates a sparse vector using unordered (index, value) pairs in a Java friendly way.
SparseMatrix - Class in org.apache.spark.ml.linalg
Column-major sparse matrix.
SparseMatrix(int, int, int[], int[], double[], boolean) - Constructor for class org.apache.spark.ml.linalg.SparseMatrix
 
SparseMatrix(int, int, int[], int[], double[]) - Constructor for class org.apache.spark.ml.linalg.SparseMatrix
Column-major sparse matrix.
SparseMatrix - Class in org.apache.spark.mllib.linalg
Column-major sparse matrix.
SparseMatrix(int, int, int[], int[], double[], boolean) - Constructor for class org.apache.spark.mllib.linalg.SparseMatrix
 
SparseMatrix(int, int, int[], int[], double[]) - Constructor for class org.apache.spark.mllib.linalg.SparseMatrix
Column-major sparse matrix.
SparseVector - Class in org.apache.spark.ml.linalg
A sparse vector represented by an index array and a value array.
SparseVector(int, int[], double[]) - Constructor for class org.apache.spark.ml.linalg.SparseVector
 
SparseVector - Class in org.apache.spark.mllib.linalg
A sparse vector represented by an index array and a value array.
SparseVector(int, int[], double[]) - Constructor for class org.apache.spark.mllib.linalg.SparseVector
 
SPARSITY() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
 
sparsity() - Method in class org.apache.spark.ml.attribute.NumericAttribute
 
spdiag(Vector) - Static method in class org.apache.spark.ml.linalg.SparseMatrix
Generate a diagonal matrix in SparseMatrix format from the supplied values.
spdiag(Vector) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
Generate a diagonal matrix in SparseMatrix format from the supplied values.
SpearmanCorrelation - Class in org.apache.spark.mllib.stat.correlation
Compute Spearman's correlation for two RDDs of the type RDD[Double] or the correlation matrix for an RDD of the type RDD[Vector].
SpearmanCorrelation() - Constructor for class org.apache.spark.mllib.stat.correlation.SpearmanCorrelation
 
SpecialLengths - Class in org.apache.spark.api.r
 
SpecialLengths() - Constructor for class org.apache.spark.api.r.SpecialLengths
 
speculative() - Method in class org.apache.spark.scheduler.TaskInfo
 
speculative() - Method in class org.apache.spark.status.api.v1.TaskData
 
speye(int) - Static method in class org.apache.spark.ml.linalg.Matrices
Generate a sparse Identity Matrix in Matrix format.
speye(int) - Static method in class org.apache.spark.ml.linalg.SparseMatrix
Generate an Identity Matrix in SparseMatrix format.
speye(int) - Static method in class org.apache.spark.mllib.linalg.Matrices
Generate a sparse Identity Matrix in Matrix format.
speye(int) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
Generate an Identity Matrix in SparseMatrix format.
SpillListener - Class in org.apache.spark
A SparkListener that detects whether spills have occurred in Spark jobs.
SpillListener() - Constructor for class org.apache.spark.SpillListener
 
split() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
 
split() - Method in class org.apache.spark.ml.tree.InternalNode
 
Split - Interface in org.apache.spark.ml.tree
Interface for a "Split," which specifies a test made at a decision tree node to choose the left or right path.
split() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
 
split() - Method in class org.apache.spark.mllib.tree.model.Node
 
Split - Class in org.apache.spark.mllib.tree.model
:: DeveloperApi :: Split applied to a feature. Parameters: feature (feature index), threshold (threshold for a continuous feature).
Split(int, double, Enumeration.Value, List<Object>) - Constructor for class org.apache.spark.mllib.tree.model.Split
 
split(Column, String) - Static method in class org.apache.spark.sql.functions
Splits str around matches of the given regex.
split(Column, String, int) - Static method in class org.apache.spark.sql.functions
Splits str around matches of the given regex.
splitAndCountPartitions(Iterator<String>) - Static method in class org.apache.spark.streaming.util.RawTextHelper
Splits lines and counts the words.
splitCommandString(String) - Static method in class org.apache.spark.util.Utils
Split a string of potentially quoted arguments from the command line the way that a shell would do it to determine arguments to a command.
SplitData(int, double[], int) - Constructor for class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData
 
SplitData(int, double, int, Seq<Object>) - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
 
SplitData$() - Constructor for class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData$
 
SplitData$() - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData$
 
splitIndex() - Method in class org.apache.spark.storage.RDDBlockId
 
SplitInfo - Class in org.apache.spark.scheduler
 
SplitInfo(Class<?>, String, String, long, Object) - Constructor for class org.apache.spark.scheduler.SplitInfo
 
splits() - Method in class org.apache.spark.ml.feature.Bucketizer
Parameter for mapping continuous features into buckets.
splitsArray() - Method in class org.apache.spark.ml.feature.Bucketizer
Parameter for specifying multiple splits parameters.
spr(double, Vector, DenseVector) - Static method in class org.apache.spark.ml.linalg.BLAS
Adds alpha * x * x.t to a matrix in-place.
spr(double, Vector, double[]) - Static method in class org.apache.spark.ml.linalg.BLAS
Adds alpha * x * x.t to a matrix in-place.
spr(double, Vector, DenseVector) - Static method in class org.apache.spark.mllib.linalg.BLAS
Adds alpha * v * v.t to a matrix in-place.
spr(double, Vector, double[]) - Static method in class org.apache.spark.mllib.linalg.BLAS
Adds alpha * v * v.t to a matrix in-place.
sprand(int, int, double, Random) - Static method in class org.apache.spark.ml.linalg.Matrices
Generate a SparseMatrix consisting of i.i.d. uniform random numbers.
sprand(int, int, double, Random) - Static method in class org.apache.spark.ml.linalg.SparseMatrix
Generate a SparseMatrix consisting of i.i.d. uniform random numbers.
sprand(int, int, double, Random) - Static method in class org.apache.spark.mllib.linalg.Matrices
Generate a SparseMatrix consisting of i.i.d. uniform random numbers.
sprand(int, int, double, Random) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
Generate a SparseMatrix consisting of i.i.d. uniform random numbers.
sprandn(int, int, double, Random) - Static method in class org.apache.spark.ml.linalg.Matrices
Generate a SparseMatrix consisting of i.i.d. gaussian random numbers.
sprandn(int, int, double, Random) - Static method in class org.apache.spark.ml.linalg.SparseMatrix
Generate a SparseMatrix consisting of i.i.d. gaussian random numbers.
sprandn(int, int, double, Random) - Static method in class org.apache.spark.mllib.linalg.Matrices
Generate a SparseMatrix consisting of i.i.d. gaussian random numbers.
sprandn(int, int, double, Random) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
Generate a SparseMatrix consisting of i.i.d. gaussian random numbers.
SPREAD_OUT_APPS() - Static method in class org.apache.spark.internal.config.Deploy
 
sqdist(Vector, Vector) - Static method in class org.apache.spark.ml.linalg.Vectors
Returns the squared distance between two Vectors.
sqdist(Vector, Vector) - Static method in class org.apache.spark.mllib.linalg.Vectors
Returns the squared distance between two Vectors.
sql(String) - Method in class org.apache.spark.sql.SparkSession
Executes a SQL query using Spark, returning the result as a DataFrame.
sql(String) - Method in class org.apache.spark.sql.SQLContext
Executes a SQL query using Spark, returning the result as a DataFrame.
sql() - Method in class org.apache.spark.sql.types.ArrayType
 
sql() - Static method in class org.apache.spark.sql.types.BinaryType
 
sql() - Static method in class org.apache.spark.sql.types.BooleanType
 
sql() - Static method in class org.apache.spark.sql.types.ByteType
 
sql() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
 
sql() - Method in class org.apache.spark.sql.types.DataType
 
sql() - Static method in class org.apache.spark.sql.types.DateType
 
sql() - Method in class org.apache.spark.sql.types.DecimalType
 
sql() - Static method in class org.apache.spark.sql.types.DoubleType
 
sql() - Static method in class org.apache.spark.sql.types.FloatType
 
sql() - Static method in class org.apache.spark.sql.types.IntegerType
 
sql() - Static method in class org.apache.spark.sql.types.LongType
 
sql() - Method in class org.apache.spark.sql.types.MapType
 
sql() - Static method in class org.apache.spark.sql.types.NullType
 
sql() - Static method in class org.apache.spark.sql.types.ShortType
 
sql() - Static method in class org.apache.spark.sql.types.StringType
 
sql() - Method in class org.apache.spark.sql.types.StructType
 
sql() - Static method in class org.apache.spark.sql.types.TimestampType
 
sqlContext() - Method in interface org.apache.spark.ml.util.BaseReadWrite
Returns the user-specified SQL context or the default.
sqlContext() - Method in class org.apache.spark.sql.Dataset
 
sqlContext() - Method in class org.apache.spark.sql.sources.BaseRelation
 
sqlContext() - Method in class org.apache.spark.sql.SparkSession
A wrapped version of this session in the form of a SQLContext, for backward compatibility.
SQLContext - Class in org.apache.spark.sql
The entry point for working with structured data (rows and columns) in Spark 1.x.
SQLContext.implicits$ - Class in org.apache.spark.sql
(Scala-specific) Implicit methods available in Scala for converting common Scala objects into DataFrames.
SQLDataTypes - Class in org.apache.spark.ml.linalg
:: DeveloperApi :: SQL data types for vectors and matrices.
SQLDataTypes() - Constructor for class org.apache.spark.ml.linalg.SQLDataTypes
 
SQLImplicits - Class in org.apache.spark.sql
A collection of implicit methods for converting common Scala objects into Datasets.
SQLImplicits() - Constructor for class org.apache.spark.sql.SQLImplicits
 
SQLImplicits.StringToColumn - Class in org.apache.spark.sql
Converts $"col name" into a Column.
SQLTransformer - Class in org.apache.spark.ml.feature
Implements the transformations which are defined by SQL statement.
SQLTransformer(String) - Constructor for class org.apache.spark.ml.feature.SQLTransformer
 
SQLTransformer() - Constructor for class org.apache.spark.ml.feature.SQLTransformer
 
sqlType() - Method in class org.apache.spark.mllib.linalg.VectorUDT
 
SQLUserDefinedType - Annotation Type in org.apache.spark.sql.types
::DeveloperApi:: A user-defined type which can be automatically recognized by a SQLContext and registered.
SQLUtils - Class in org.apache.spark.sql.api.r
 
SQLUtils() - Constructor for class org.apache.spark.sql.api.r.SQLUtils
 
sqrt(Column) - Static method in class org.apache.spark.sql.functions
Computes the square root of the specified float value.
sqrt(String) - Static method in class org.apache.spark.sql.functions
Computes the square root of the specified float value.
Sqrt$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt$
 
SquaredError - Class in org.apache.spark.mllib.tree.loss
:: DeveloperApi :: Class for squared error loss calculation.
SquaredError() - Constructor for class org.apache.spark.mllib.tree.loss.SquaredError
 
SquaredEuclideanSilhouette - Class in org.apache.spark.ml.evaluation
SquaredEuclideanSilhouette computes the average of the Silhouette over all the data of the dataset, which is a measure of how appropriately the data have been clustered.
SquaredEuclideanSilhouette() - Constructor for class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette
 
SquaredEuclideanSilhouette.ClusterStats - Class in org.apache.spark.ml.evaluation
 
SquaredEuclideanSilhouette.ClusterStats$ - Class in org.apache.spark.ml.evaluation
 
SquaredL2Updater - Class in org.apache.spark.mllib.optimization
:: DeveloperApi :: Updater for L2 regularized problems.
SquaredL2Updater() - Constructor for class org.apache.spark.mllib.optimization.SquaredL2Updater
 
squaredNormSum() - Method in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats
 
Src - Static variable in class org.apache.spark.graphx.TripletFields
Expose the source and edge fields but not the destination field.
srcAttr() - Method in class org.apache.spark.graphx.EdgeContext
The vertex attribute of the edge's source vertex.
srcAttr() - Method in class org.apache.spark.graphx.EdgeTriplet
The source vertex attribute
srcAttr() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
 
srcCol() - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
 
srcCol() - Method in interface org.apache.spark.ml.clustering.PowerIterationClusteringParams
Param for the name of the input column for source vertex IDs.
srcId() - Method in class org.apache.spark.graphx.Edge
 
srcId() - Method in class org.apache.spark.graphx.EdgeContext
The vertex id of the edge's source vertex.
srcId() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
 
srdd() - Method in class org.apache.spark.api.java.JavaDoubleRDD
 
ssc() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
 
stackTrace() - Method in class org.apache.spark.ExceptionFailure
 
StackTrace - Class in org.apache.spark.status.api.v1
 
StackTrace(Seq<String>) - Constructor for class org.apache.spark.status.api.v1.StackTrace
 
stackTrace() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
 
stackTraceFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
 
stackTraceToJson(StackTraceElement[]) - Static method in class org.apache.spark.util.JsonProtocol
 
stage() - Method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
 
STAGE() - Static method in class org.apache.spark.status.TaskIndexNames
 
STAGE_DAG() - Static method in class org.apache.spark.ui.ToolTips
 
STAGE_TIMELINE() - Static method in class org.apache.spark.ui.ToolTips
 
stageAttempt() - Method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
 
stageAttemptId() - Method in class org.apache.spark.ContextBarrierId
 
stageAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklistedForStage
 
stageAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerNodeBlacklistedForStage
 
stageAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted
 
stageAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerStageExecutorMetrics
 
stageAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerTaskEnd
 
stageAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerTaskStart
 
stageAttemptNumber() - Method in class org.apache.spark.BarrierTaskContext
 
stageAttemptNumber() - Method in class org.apache.spark.TaskContext
How many times the stage that this task belongs to has been attempted.
stageCompletedFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
 
stageCompletedToJson(SparkListenerStageCompleted) - Static method in class org.apache.spark.util.JsonProtocol
 
stageCreate(Identifier, StructType, Transform[], Map<String, String>) - Method in interface org.apache.spark.sql.connector.catalog.StagingTableCatalog
Stage the creation of a table, preparing it to be committed into the metastore.
stageCreateOrReplace(Identifier, StructType, Transform[], Map<String, String>) - Method in interface org.apache.spark.sql.connector.catalog.StagingTableCatalog
Stage the creation or replacement of a table, preparing it to be committed into the metastore when the returned table's StagedTable.commitStagedChanges() is called.
StageData - Class in org.apache.spark.status.api.v1
 
StagedTable - Interface in org.apache.spark.sql.connector.catalog
Represents a table which is staged for being committed to the metastore.
stageExecutorMetricsFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
 
stageExecutorMetricsToJson(SparkListenerStageExecutorMetrics) - Static method in class org.apache.spark.util.JsonProtocol
 
stageFailed(String) - Method in class org.apache.spark.scheduler.StageInfo
 
stageId() - Method in class org.apache.spark.BarrierTaskContext
 
stageId() - Method in class org.apache.spark.ContextBarrierId
 
stageId() - 接口 中的方法org.apache.spark.scheduler.Schedulable
 
stageId() - 类 中的方法org.apache.spark.scheduler.SparkListenerExecutorBlacklistedForStage
 
stageId() - 类 中的方法org.apache.spark.scheduler.SparkListenerNodeBlacklistedForStage
 
stageId() - 类 中的方法org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted
 
stageId() - 类 中的方法org.apache.spark.scheduler.SparkListenerStageExecutorMetrics
 
stageId() - 类 中的方法org.apache.spark.scheduler.SparkListenerTaskEnd
 
stageId() - 类 中的方法org.apache.spark.scheduler.SparkListenerTaskStart
 
stageId() - 类 中的方法org.apache.spark.scheduler.StageInfo
 
stageId() - 接口 中的方法org.apache.spark.SparkStageInfo
 
stageId() - 类 中的方法org.apache.spark.SparkStageInfoImpl
 
stageId() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
stageId() - 类 中的方法org.apache.spark.TaskContext
The ID of the stage that this task belong to.
stageIds() - 类 中的方法org.apache.spark.scheduler.SparkListenerJobStart
 
stageIds() - 接口 中的方法org.apache.spark.SparkJobInfo
 
stageIds() - 类 中的方法org.apache.spark.SparkJobInfoImpl
 
stageIds() - 类 中的方法org.apache.spark.status.api.v1.JobData
 
stageIds() - 类 中的方法org.apache.spark.status.LiveJob
 
stageIds() - 类 中的方法org.apache.spark.status.SchedulerPool
 
stageInfo() - 类 中的方法org.apache.spark.scheduler.SparkListenerStageCompleted
 
stageInfo() - 类 中的方法org.apache.spark.scheduler.SparkListenerStageSubmitted
 
StageInfo - org.apache.spark.scheduler中的类
:: DeveloperApi :: Stores information about a stage to pass from the scheduler to SparkListeners.
StageInfo(int, int, String, int, Seq<RDDInfo>, Seq<Object>, String, TaskMetrics, Seq<Seq<TaskLocation>>, Option<Object>) - 类 的构造器org.apache.spark.scheduler.StageInfo
 
stageInfoFromJson(JsonAST.JValue) - 类 中的静态方法org.apache.spark.util.JsonProtocol
--------------------------------------------------------------------- * JSON deserialization methods for classes SparkListenerEvents depend on |
stageInfos() - 类 中的方法org.apache.spark.scheduler.SparkListenerJobStart
 
stageInfoToJson(StageInfo) - 类 中的静态方法org.apache.spark.util.JsonProtocol
------------------------------------------------------------------- * JSON serialization methods for classes SparkListenerEvents depend on |
stageName() - 类 中的方法org.apache.spark.ml.clustering.InternalKMeansModelWriter
 
stageName() - 类 中的方法org.apache.spark.ml.clustering.PMMLKMeansModelWriter
 
stageName() - 类 中的方法org.apache.spark.ml.regression.InternalLinearRegressionModelWriter
 
stageName() - 类 中的方法org.apache.spark.ml.regression.PMMLLinearRegressionModelWriter
 
stageName() - 接口 中的方法org.apache.spark.ml.util.MLFormatRegister
The string that represents the stage type that this writer supports.
stageReplace(Identifier, StructType, Transform[], Map<String, String>) - 接口 中的方法org.apache.spark.sql.connector.catalog.StagingTableCatalog
Stage the replacement of a table, preparing it to be committed into the metastore when the returned table's StagedTable.commitStagedChanges() is called.
stages() - 类 中的方法org.apache.spark.ml.Pipeline
param for pipeline stages
stages() - 类 中的方法org.apache.spark.ml.PipelineModel
 
StageStatus - org.apache.spark.status.api.v1中的枚举
 
stageSubmittedFromJson(JsonAST.JValue) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
stageSubmittedToJson(SparkListenerStageSubmitted) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
StagingTableCatalog - org.apache.spark.sql.connector.catalog中的接口
An optional mix-in for implementations of TableCatalog that support staging creation of the a table before committing the table's metadata along with its contents in CREATE TABLE AS SELECT or REPLACE TABLE AS SELECT operations.
standardization() - 类 中的方法org.apache.spark.ml.classification.LinearSVC
 
standardization() - 类 中的方法org.apache.spark.ml.classification.LinearSVCModel
 
standardization() - 类 中的方法org.apache.spark.ml.classification.LogisticRegression
 
standardization() - 类 中的方法org.apache.spark.ml.classification.LogisticRegressionModel
 
standardization() - 接口 中的方法org.apache.spark.ml.param.shared.HasStandardization
Param for whether to standardize the training features before fitting the model.
standardization() - 类 中的方法org.apache.spark.ml.regression.LinearRegression
 
standardization() - 类 中的方法org.apache.spark.ml.regression.LinearRegressionModel
 
StandardNormalGenerator - org.apache.spark.mllib.random中的类
:: DeveloperApi :: Generates i.i.d. samples from the standard normal distribution.
StandardNormalGenerator() - 类 的构造器org.apache.spark.mllib.random.StandardNormalGenerator
 
StandardScaler - org.apache.spark.ml.feature中的类
Standardizes features by removing the mean and scaling to unit variance using column summary statistics on the samples in the training set.
StandardScaler(String) - 类 的构造器org.apache.spark.ml.feature.StandardScaler
 
StandardScaler() - 类 的构造器org.apache.spark.ml.feature.StandardScaler
 
StandardScaler - org.apache.spark.mllib.feature中的类
Standardizes features by removing the mean and scaling to unit std using column summary statistics on the samples in the training set.
StandardScaler(boolean, boolean) - 类 的构造器org.apache.spark.mllib.feature.StandardScaler
 
StandardScaler() - 类 的构造器org.apache.spark.mllib.feature.StandardScaler
 
StandardScalerModel - org.apache.spark.ml.feature中的类
Model fitted by StandardScaler.
StandardScalerModel - org.apache.spark.mllib.feature中的类
Represents a StandardScaler model that can transform vectors.
StandardScalerModel(Vector, Vector, boolean, boolean) - 类 的构造器org.apache.spark.mllib.feature.StandardScalerModel
 
StandardScalerModel(Vector, Vector) - 类 的构造器org.apache.spark.mllib.feature.StandardScalerModel
 
StandardScalerModel(Vector) - 类 的构造器org.apache.spark.mllib.feature.StandardScalerModel
 
StandardScalerParams - org.apache.spark.ml.feature中的接口
starGraph(SparkContext, int) - 类 中的静态方法org.apache.spark.graphx.util.GraphGenerators
Create a star graph with vertex 0 being the center.
start() - 接口 中的方法org.apache.spark.metrics.sink.Sink
 
start() - 接口 中的方法org.apache.spark.scheduler.SchedulerBackend
 
start() - 接口 中的方法org.apache.spark.scheduler.TaskScheduler
 
start(String) - 类 中的方法org.apache.spark.sql.streaming.DataStreamWriter
Starts the execution of the streaming query, which will continually output results to the given path as new data arrives.
start() - 类 中的方法org.apache.spark.sql.streaming.DataStreamWriter
Starts the execution of the streaming query, which will continually output results to the given path as new data arrives.
start() - 类 中的方法org.apache.spark.streaming.api.java.JavaStreamingContext
Start the execution of the streams.
start() - 类 中的方法org.apache.spark.streaming.dstream.ConstantInputDStream
 
start() - 类 中的方法org.apache.spark.streaming.dstream.InputDStream
Method called to start receiving data.
start() - 类 中的方法org.apache.spark.streaming.dstream.ReceiverInputDStream
 
start() - 类 中的方法org.apache.spark.streaming.StreamingContext
Start the execution of the streams.
startApplication(SparkAppHandle.Listener...) - 类 中的方法org.apache.spark.launcher.AbstractLauncher
Starts a Spark application.
startApplication(SparkAppHandle.Listener...) - 类 中的方法org.apache.spark.launcher.InProcessLauncher
Starts a Spark application.
startApplication(SparkAppHandle.Listener...) - 类 中的方法org.apache.spark.launcher.SparkLauncher
Starts a Spark application.
startIndexInLevel(int) - 类 中的静态方法org.apache.spark.mllib.tree.model.Node
Return the index of the first node in the given level.
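The Node.startIndexInLevel entry above reflects the standard 1-based binary-heap layout of decision-tree nodes, where the root is node 1 and level l starts at index 2^l. A plain-Python sketch of that indexing relationship (an assumed illustration of the layout, not Spark's actual source):

```python
def start_index_in_level(level: int) -> int:
    # With 1-based node indexing (root = node 1), level `level` contains
    # nodes 2**level .. 2**(level + 1) - 1, so its first node sits at 2**level.
    return 1 << level

# Levels 0..3 start at indices 1, 2, 4, 8.
print([start_index_in_level(l) for l in range(4)])  # [1, 2, 4, 8]
```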
startJettyServer(String, int, org.apache.spark.SSLOptions, SparkConf, String) - Static method in class org.apache.spark.ui.JettyUtils
Attempt to start a Jetty server bound to the supplied hostName:port using the given context handlers.
startOffset() - Method in class org.apache.spark.sql.streaming.SourceProgress
 
startOffset() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException
 
startPosition() - Method in exception org.apache.spark.sql.AnalysisException
 
startReduceId() - Method in class org.apache.spark.storage.ShuffleBlockBatchId
 
startServiceOnPort(int, Function1<Object, Tuple2<T, Object>>, SparkConf, String) - Static method in class org.apache.spark.util.Utils
Attempt to start a service on the given port, or fail after a number of attempts.
startsWith(Column) - Method in class org.apache.spark.sql.Column
String starts with.
startsWith(String) - Method in class org.apache.spark.sql.Column
String starts with another string literal.
startTime() - Method in class org.apache.spark.api.java.JavaSparkContext
 
startTime() - Method in class org.apache.spark.SparkContext
 
startTime() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
 
startTime() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo
 
startTime() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
 
startTime() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
 
stat() - Method in class org.apache.spark.sql.Dataset
Returns a DataFrameStatFunctions for working with statistic functions.
StatCounter - Class in org.apache.spark.util
A class for tracking the statistics of a set of numbers (count, mean and variance) in a numerically robust way.
StatCounter(TraversableOnce<Object>) - Constructor for class org.apache.spark.util.StatCounter
 
StatCounter() - Constructor for class org.apache.spark.util.StatCounter
Initialize the StatCounter with no values.
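The "numerically robust" tracking of count, mean, and variance that the StatCounter summary describes is conventionally done with a Welford-style online update, which avoids the catastrophic cancellation of the naive sum-of-squares formula. A plain-Python sketch of that technique (an illustration of the approach, not StatCounter's actual source):

```python
import math

class RunningStats:
    """Track count, mean, and variance with Welford's online update."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def merge(self, x: float) -> "RunningStats":
        # Update count, mean, and m2 incrementally for one new value.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return self

    def variance(self) -> float:
        # Population variance, matching StatCounter.variance's convention.
        return self.m2 / self.n if self.n > 0 else float("nan")

    def stdev(self) -> float:
        # Population standard deviation.
        return math.sqrt(self.variance())

s = RunningStats()
for x in [1.0, 2.0, 3.0, 4.0]:
    s.merge(x)
print(s.n, s.mean, s.variance())  # 4 2.5 1.25
```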
state() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
 
state() - Method in class org.apache.spark.scheduler.local.StatusUpdate
 
State<S> - Class in org.apache.spark.streaming
:: Experimental :: Abstract class for getting and updating the state in the mapping function used in the mapWithState operation of a pair DStream (Scala) or a JavaPairDStream (Java).
State() - Constructor for class org.apache.spark.streaming.State
 
stateChanged(SparkAppHandle) - Method in interface org.apache.spark.launcher.SparkAppHandle.Listener
Callback for changes in the handle's state.
statement() - Method in class org.apache.spark.ml.feature.SQLTransformer
SQL statement parameter.
StateOperatorProgress - Class in org.apache.spark.sql.streaming
Information about updates made to stateful operators in a StreamingQuery during a trigger.
stateOperators() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
 
stateSnapshots() - Method in class org.apache.spark.streaming.api.java.JavaMapWithStateDStream
 
stateSnapshots() - Method in class org.apache.spark.streaming.dstream.MapWithStateDStream
Return a pair DStream where each RDD is the snapshot of the state of all the keys.
StateSpec<KeyType,ValueType,StateType,MappedType> - Class in org.apache.spark.streaming
:: Experimental :: Abstract class representing all the specifications of the DStream transformation mapWithState operation of a pair DStream (Scala) or a JavaPairDStream (Java).
StateSpec() - Constructor for class org.apache.spark.streaming.StateSpec
 
staticPageRank(int, double) - Method in class org.apache.spark.graphx.GraphOps
Run PageRank for a fixed number of iterations returning a graph with vertex attributes containing the PageRank and edge attributes the normalized edge weight.
staticParallelPersonalizedPageRank(long[], int, double) - Method in class org.apache.spark.graphx.GraphOps
Run parallel personalized PageRank for a given array of source vertices, such that all random walks are started relative to the source vertices.
staticPersonalizedPageRank(long, int, double) - Method in class org.apache.spark.graphx.GraphOps
Run Personalized PageRank for a fixed number of iterations with all iterations originating at the source node, returning a graph with vertex attributes containing the PageRank and edge attributes the normalized edge weight.
StaticSources - Class in org.apache.spark.metrics.source
 
StaticSources() - Constructor for class org.apache.spark.metrics.source.StaticSources
 
statistic() - Method in class org.apache.spark.mllib.stat.test.ChiSqTestResult
 
statistic() - Method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTestResult
 
statistic() - Method in interface org.apache.spark.mllib.stat.test.TestResult
Test statistic.
Statistics - Class in org.apache.spark.mllib.stat
API for statistical functions in MLlib.
Statistics() - Constructor for class org.apache.spark.mllib.stat.Statistics
 
Statistics - Interface in org.apache.spark.sql.connector.read
An interface to represent statistics for a data source, which is returned by SupportsReportStatistics.estimateStatistics().
stats() - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return a StatCounter object that captures the mean, variance and count of the RDD's elements in one operation.
stats() - Method in class org.apache.spark.mllib.tree.model.Node
 
stats() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
Return a StatCounter object that captures the mean, variance and count of the RDD's elements in one operation.
StatsdMetricType - Class in org.apache.spark.metrics.sink
 
StatsdMetricType() - Constructor for class org.apache.spark.metrics.sink.StatsdMetricType
 
StatsReportListener - Class in org.apache.spark.scheduler
:: DeveloperApi :: Simple SparkListener that logs a few summary statistics when each stage completes.
StatsReportListener() - Constructor for class org.apache.spark.scheduler.StatsReportListener
 
StatsReportListener - Class in org.apache.spark.streaming.scheduler
:: DeveloperApi :: A simple StreamingListener that logs summary statistics across Spark Streaming batches. param: numBatchInfos Number of last batches to consider for generating statistics (default: 10)
StatsReportListener(int) - Constructor for class org.apache.spark.streaming.scheduler.StatsReportListener
 
Status - Class in org.apache.spark.internal.config
 
Status() - Constructor for class org.apache.spark.internal.config.Status
 
status() - Method in class org.apache.spark.scheduler.TaskInfo
 
status() - Method in interface org.apache.spark.SparkJobInfo
 
status() - Method in class org.apache.spark.SparkJobInfoImpl
 
status() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
Returns the current status of the query.
status() - Method in class org.apache.spark.status.api.v1.JobData
 
status() - Method in class org.apache.spark.status.api.v1.StageData
 
status() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
 
status() - Method in class org.apache.spark.status.api.v1.TaskData
 
status() - Method in class org.apache.spark.status.LiveJob
 
status() - Method in class org.apache.spark.status.LiveStage
 
STATUS() - Static method in class org.apache.spark.status.TaskIndexNames
 
status() - Method in class org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus
 
statusTracker() - Method in class org.apache.spark.api.java.JavaSparkContext
 
statusTracker() - Method in class org.apache.spark.SparkContext
 
StatusUpdate(String, long, Enumeration.Value, org.apache.spark.util.SerializableBuffer, Map<String, ResourceInformation>) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
 
StatusUpdate - Class in org.apache.spark.scheduler.local
 
StatusUpdate(long, Enumeration.Value, ByteBuffer) - Constructor for class org.apache.spark.scheduler.local.StatusUpdate
 
StatusUpdate$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate$
 
STD() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
 
std() - Method in class org.apache.spark.ml.attribute.NumericAttribute
 
std() - Method in class org.apache.spark.ml.feature.StandardScalerModel
 
std() - Method in class org.apache.spark.mllib.feature.StandardScalerModel
 
std() - Method in class org.apache.spark.mllib.random.LogNormalGenerator
 
stddev(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: alias for stddev_samp.
stddev(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: alias for stddev_samp.
stddev_pop(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the population standard deviation of the expression in a group.
stddev_pop(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the population standard deviation of the expression in a group.
stddev_samp(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the sample standard deviation of the expression in a group.
stddev_samp(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the sample standard deviation of the expression in a group.
stdev() - Method in class org.apache.spark.api.java.JavaDoubleRDD
Compute the population standard deviation of this RDD's elements.
stdev() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
Compute the population standard deviation of this RDD's elements.
stdev() - Method in class org.apache.spark.util.StatCounter
Return the population standard deviation of the values.
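The only difference between the stddev_pop and stddev_samp aggregates above is the divisor (n versus n - 1); stddev itself is an alias for the sample flavor, while the RDD-side stdev entries return the population value. A plain-Python check of the two formulas (illustrative, not Spark code):

```python
import math

def stddev_pop(xs):
    # Population standard deviation: divide the squared deviations by n.
    n = len(xs)
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / n)

def stddev_samp(xs):
    # Sample standard deviation: divide by n - 1 (Bessel's correction).
    n = len(xs)
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(stddev_pop(xs))   # 2.0
print(stddev_samp(xs))  # ~2.138 (sqrt(32/7))
```

The sample estimator is always the larger of the two, and the gap shrinks as n grows.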
stepSize() - 类 中的方法org.apache.spark.ml.classification.GBTClassificationModel
 
stepSize() - 类 中的方法org.apache.spark.ml.classification.GBTClassifier
 
stepSize() - 类 中的方法org.apache.spark.ml.classification.MultilayerPerceptronClassifier
 
stepSize() - 类 中的方法org.apache.spark.ml.feature.Word2Vec
 
stepSize() - 类 中的方法org.apache.spark.ml.feature.Word2VecModel
 
stepSize() - 接口 中的方法org.apache.spark.ml.param.shared.HasStepSize
Param for Step size to be used for each iteration of optimization (&gt; 0).
stepSize() - 类 中的方法org.apache.spark.ml.regression.GBTRegressionModel
 
stepSize() - 类 中的方法org.apache.spark.ml.regression.GBTRegressor
 
stepSize() - 接口 中的方法org.apache.spark.ml.tree.GBTParams
Param for Step size (a.k.a. learning rate) in interval (0, 1] for shrinking the contribution of each estimator.
stop() - 类 中的方法org.apache.spark.api.java.JavaSparkContext
Shut down the SparkContext.
stop() - 接口 中的方法org.apache.spark.broadcast.BroadcastFactory
 
stop() - 接口 中的方法org.apache.spark.launcher.SparkAppHandle
Asks the application to stop.
stop() - 接口 中的方法org.apache.spark.metrics.sink.Sink
 
stop() - 类 中的方法org.apache.spark.rpc.netty.MessageLoop
 
stop() - 接口 中的方法org.apache.spark.rpc.RpcEndpoint
A convenient method to stop RpcEndpoint.
stop() - 接口 中的方法org.apache.spark.scheduler.SchedulerBackend
 
stop() - 接口 中的方法org.apache.spark.scheduler.TaskScheduler
 
stop() - 类 中的方法org.apache.spark.SparkContext
Shut down the SparkContext.
stop() - 接口 中的方法org.apache.spark.sql.connector.read.streaming.SparkDataStream
Stop this source and free any resources it has allocated.
stop() - 类 中的方法org.apache.spark.sql.SparkSession
Stop the underlying SparkContext.
stop() - 接口 中的方法org.apache.spark.sql.streaming.StreamingQuery
Stops the execution of this query if it is running.
stop() - 类 中的方法org.apache.spark.streaming.api.java.JavaStreamingContext
Stop the execution of the streams.
stop(boolean) - 类 中的方法org.apache.spark.streaming.api.java.JavaStreamingContext
Stop the execution of the streams.
stop(boolean, boolean) - 类 中的方法org.apache.spark.streaming.api.java.JavaStreamingContext
Stop the execution of the streams.
stop() - 类 中的方法org.apache.spark.streaming.dstream.ConstantInputDStream
 
stop() - 类 中的方法org.apache.spark.streaming.dstream.InputDStream
Method called to stop receiving data.
stop() - 类 中的方法org.apache.spark.streaming.dstream.ReceiverInputDStream
 
stop(String) - 类 中的方法org.apache.spark.streaming.receiver.Receiver
Stop the receiver completely.
stop(String, Throwable) - 类 中的方法org.apache.spark.streaming.receiver.Receiver
Stop the receiver completely due to an exception
stop(boolean) - 类 中的方法org.apache.spark.streaming.StreamingContext
Stop the execution of the streams immediately (does not wait for all received data to be processed).
stop(boolean, boolean) - 类 中的方法org.apache.spark.streaming.StreamingContext
Stop the execution of the streams, with option of ensuring all received data has been processed.
StopAllReceivers - org.apache.spark.streaming.scheduler中的类
This message will trigger ReceiverTrackerEndpoint to send stop signals to all registered receivers.
StopAllReceivers() - 类 的构造器org.apache.spark.streaming.scheduler.StopAllReceivers
 
StopBlockManagerMaster$() - 类 的构造器org.apache.spark.storage.BlockManagerMessages.StopBlockManagerMaster$
 
StopCoordinator - org.apache.spark.scheduler中的类
 
StopCoordinator() - 类 的构造器org.apache.spark.scheduler.StopCoordinator
 
StopDriver$() - 类 的构造器org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopDriver$
 
StopExecutor - org.apache.spark.scheduler.local中的类
 
StopExecutor() - 类 的构造器org.apache.spark.scheduler.local.StopExecutor
 
StopExecutor$() - 类 的构造器org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutor$
 
StopExecutors$() - 类 的构造器org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutors$
 
StopMapOutputTracker - org.apache.spark中的类
 
StopMapOutputTracker() - 类 的构造器org.apache.spark.StopMapOutputTracker
 
StopReceiver - org.apache.spark.streaming.receiver中的类
 
StopReceiver() - 类 的构造器org.apache.spark.streaming.receiver.StopReceiver
 
stopWords() - 类 中的方法org.apache.spark.ml.feature.StopWordsRemover
The words to be filtered out.
StopWordsRemover - org.apache.spark.ml.feature中的类
A feature transformer that filters out stop words from input.
StopWordsRemover(String) - 类 的构造器org.apache.spark.ml.feature.StopWordsRemover
 
StopWordsRemover() - 类 的构造器org.apache.spark.ml.feature.StopWordsRemover
 
storage() - 类 中的方法org.apache.spark.sql.hive.execution.InsertIntoHiveDirCommand
 
STORAGE_LEVEL() - 类 中的静态方法org.apache.spark.ui.storage.ToolTips
 
STORAGE_MEMORY() - 类 中的静态方法org.apache.spark.ui.ToolTips
 
storageLevel() - 类 中的方法org.apache.spark.sql.Dataset
Get the Dataset's current storage level, or StorageLevel.NONE if not persisted.
storageLevel() - 类 中的方法org.apache.spark.status.api.v1.RDDPartitionInfo
 
storageLevel() - 类 中的方法org.apache.spark.status.api.v1.RDDStorageInfo
 
storageLevel() - 类 中的方法org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
 
storageLevel() - 类 中的方法org.apache.spark.storage.BlockStatus
 
storageLevel() - 类 中的方法org.apache.spark.storage.BlockUpdatedInfo
 
storageLevel() - 类 中的方法org.apache.spark.storage.RDDInfo
 
StorageLevel - org.apache.spark.storage中的类
:: DeveloperApi :: Flags for controlling the storage of an RDD.
StorageLevel() - 类 的构造器org.apache.spark.storage.StorageLevel
 
storageLevel() - 类 中的方法org.apache.spark.streaming.receiver.Receiver
 
storageLevelFromJson(JsonAST.JValue) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
StorageLevels - org.apache.spark.api.java中的类
Expose some commonly useful storage level constants.
StorageLevels() - 类 的构造器org.apache.spark.api.java.StorageLevels
 
storageLevelToJson(StorageLevel) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
StorageUtils - org.apache.spark.storage中的类
Helper methods for storage-related objects.
StorageUtils() - 类 的构造器org.apache.spark.storage.StorageUtils
 
store(T) - 类 中的方法org.apache.spark.streaming.receiver.Receiver
Store a single item of received data to Spark's memory.
store(ArrayBuffer<T>) - 类 中的方法org.apache.spark.streaming.receiver.Receiver
Store an ArrayBuffer of received data as a data block into Spark's memory.
store(ArrayBuffer<T>, Object) - 类 中的方法org.apache.spark.streaming.receiver.Receiver
Store an ArrayBuffer of received data as a data block into Spark's memory.
store(Iterator<T>) - 类 中的方法org.apache.spark.streaming.receiver.Receiver
Store an iterator of received data as a data block into Spark's memory.
store(Iterator<T>, Object) - 类 中的方法org.apache.spark.streaming.receiver.Receiver
Store an iterator of received data as a data block into Spark's memory.
store(Iterator<T>) - 类 中的方法org.apache.spark.streaming.receiver.Receiver
Store an iterator of received data as a data block into Spark's memory.
store(Iterator<T>, Object) - 类 中的方法org.apache.spark.streaming.receiver.Receiver
Store an iterator of received data as a data block into Spark's memory.
store(ByteBuffer) - 类 中的方法org.apache.spark.streaming.receiver.Receiver
Store the bytes of received data as a data block into Spark's memory.
store(ByteBuffer, Object) - 类 中的方法org.apache.spark.streaming.receiver.Receiver
Store the bytes of received data as a data block into Spark's memory.
storeBlock(StreamBlockId, ReceivedBlock) - 接口 中的方法org.apache.spark.streaming.receiver.ReceivedBlockHandler
Store a received block with the given block id and return related metadata
storeValue(T) - 类 中的方法org.apache.spark.storage.memory.DeserializedValuesHolder
 
storeValue(T) - 类 中的方法org.apache.spark.storage.memory.SerializedValuesHolder
 
storeValue(T) - 接口 中的方法org.apache.spark.storage.memory.ValuesHolder
 
strategy() - 类 中的方法org.apache.spark.ml.feature.Imputer
 
strategy() - 类 中的方法org.apache.spark.ml.feature.ImputerModel
 
strategy() - 接口 中的方法org.apache.spark.ml.feature.ImputerParams
The imputation strategy.
Strategy - org.apache.spark.mllib.tree.configuration中的类
Stores all the configuration options for tree construction param: algo Learning goal.
Strategy(Enumeration.Value, Impurity, int, int, int, Enumeration.Value, Map<Object, Object>, int, double, int, double, boolean, int, double) - 类 的构造器org.apache.spark.mllib.tree.configuration.Strategy
 
Strategy(Enumeration.Value, Impurity, int, int, int, Enumeration.Value, Map<Object, Object>, int, double, int, double, boolean, int) - 类 的构造器org.apache.spark.mllib.tree.configuration.Strategy
Backwards compatible constructor for Strategy
Strategy(Enumeration.Value, Impurity, int, int, int, Map<Integer, Integer>) - 类 的构造器org.apache.spark.mllib.tree.configuration.Strategy
Java-friendly constructor for Strategy
StratifiedSamplingUtils - org.apache.spark.util.random中的类
Auxiliary functions and data structures for the sampleByKey method in PairRDDFunctions.
StratifiedSamplingUtils() - 类 的构造器org.apache.spark.util.random.StratifiedSamplingUtils
 
STREAM() - 类 中的静态方法org.apache.spark.storage.BlockId
 
StreamBlockId - org.apache.spark.storage中的类
 
StreamBlockId(int, long) - 类 的构造器org.apache.spark.storage.StreamBlockId
 
streamId() - 类 中的方法org.apache.spark.status.api.v1.streaming.ReceiverInfo
 
streamId() - 类 中的方法org.apache.spark.storage.StreamBlockId
 
streamId() - 类 中的方法org.apache.spark.streaming.receiver.Receiver
Get the unique identifier the receiver input stream that this receiver is associated with.
streamId() - 类 中的方法org.apache.spark.streaming.scheduler.ReceiverInfo
 
streamIdToInputInfo() - 类 中的方法org.apache.spark.streaming.scheduler.BatchInfo
 
Streaming - org.apache.spark.internal.config中的类
 
Streaming() - 类 的构造器org.apache.spark.internal.config.Streaming
 
StreamingContext - org.apache.spark.streaming中的类
Main entry point for Spark Streaming functionality.
StreamingContext(SparkContext, Duration) - 类 的构造器org.apache.spark.streaming.StreamingContext
Create a StreamingContext using an existing SparkContext.
StreamingContext(SparkConf, Duration) - 类 的构造器org.apache.spark.streaming.StreamingContext
Create a StreamingContext by providing the configuration necessary for a new SparkContext.
StreamingContext(String, String, Duration, String, Seq<String>, Map<String, String>) - 类 的构造器org.apache.spark.streaming.StreamingContext
Create a StreamingContext by providing the details necessary for creating a new SparkContext.
StreamingContext(String, Configuration) - 类 的构造器org.apache.spark.streaming.StreamingContext
Recreate a StreamingContext from a checkpoint file.
StreamingContext(String) - 类 的构造器org.apache.spark.streaming.StreamingContext
Recreate a StreamingContext from a checkpoint file.
StreamingContext(String, SparkContext) - 类 的构造器org.apache.spark.streaming.StreamingContext
Recreate a StreamingContext from a checkpoint file using an existing SparkContext.
StreamingContextPythonHelper - org.apache.spark.streaming中的类
 
StreamingContextPythonHelper() - 类 的构造器org.apache.spark.streaming.StreamingContextPythonHelper
 
StreamingContextState - org.apache.spark.streaming中的枚举
:: DeveloperApi :: Represents the state of a StreamingContext.
StreamingDataWriterFactory - org.apache.spark.sql.connector.write.streaming中的接口
A factory of DataWriter returned by StreamingWrite.createStreamingWriterFactory(), which is responsible for creating and initializing the actual data writer at executor side.
StreamingKMeans - org.apache.spark.mllib.clustering中的类
StreamingKMeans provides methods for configuring a streaming k-means analysis, training the model on streaming, and using the model to make predictions on streaming data.
StreamingKMeans(int, double, String) - 类 的构造器org.apache.spark.mllib.clustering.StreamingKMeans
 
StreamingKMeans() - 类 的构造器org.apache.spark.mllib.clustering.StreamingKMeans
 
StreamingKMeansModel - org.apache.spark.mllib.clustering中的类
StreamingKMeansModel extends MLlib's KMeansModel for streaming algorithms, so it can keep track of a continuously updated weight associated with each cluster, and also update the model by doing a single iteration of the standard k-means algorithm.
StreamingKMeansModel(Vector[], double[]) - Constructor for class org.apache.spark.mllib.clustering.StreamingKMeansModel
 
StreamingLinearAlgorithm<M extends GeneralizedLinearModel,A extends GeneralizedLinearAlgorithm<M>> - Class in org.apache.spark.mllib.regression
:: DeveloperApi :: StreamingLinearAlgorithm implements methods for continuously training a generalized linear model on streaming data, and using it for prediction on (possibly different) streaming data.
StreamingLinearAlgorithm() - Constructor for class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
 
StreamingLinearRegressionWithSGD - Class in org.apache.spark.mllib.regression
Train or predict a linear regression model on streaming data.
StreamingLinearRegressionWithSGD() - Constructor for class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
Construct a StreamingLinearRegression object with default parameters: {stepSize: 0.1, numIterations: 50, miniBatchFraction: 1.0}.
StreamingListener - Interface in org.apache.spark.streaming.scheduler
:: DeveloperApi :: A listener interface for receiving information about an ongoing streaming computation.
StreamingListenerBatchCompleted - Class in org.apache.spark.streaming.scheduler
 
StreamingListenerBatchCompleted(BatchInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted
 
StreamingListenerBatchStarted - Class in org.apache.spark.streaming.scheduler
 
StreamingListenerBatchStarted(BatchInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted
 
StreamingListenerBatchSubmitted - Class in org.apache.spark.streaming.scheduler
 
StreamingListenerBatchSubmitted(BatchInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted
 
StreamingListenerEvent - Interface in org.apache.spark.streaming.scheduler
:: DeveloperApi :: Base trait for events related to StreamingListener
StreamingListenerOutputOperationCompleted - Class in org.apache.spark.streaming.scheduler
 
StreamingListenerOutputOperationCompleted(OutputOperationInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationCompleted
 
StreamingListenerOutputOperationStarted - Class in org.apache.spark.streaming.scheduler
 
StreamingListenerOutputOperationStarted(OutputOperationInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationStarted
 
StreamingListenerReceiverError - Class in org.apache.spark.streaming.scheduler
 
StreamingListenerReceiverError(ReceiverInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerReceiverError
 
StreamingListenerReceiverStarted - Class in org.apache.spark.streaming.scheduler
 
StreamingListenerReceiverStarted(ReceiverInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStarted
 
StreamingListenerReceiverStopped - Class in org.apache.spark.streaming.scheduler
 
StreamingListenerReceiverStopped(ReceiverInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStopped
 
StreamingListenerStreamingStarted - Class in org.apache.spark.streaming.scheduler
 
StreamingListenerStreamingStarted(long) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerStreamingStarted
 
StreamingLogisticRegressionWithSGD - Class in org.apache.spark.mllib.classification
Train or predict a logistic regression model on streaming data.
StreamingLogisticRegressionWithSGD() - Constructor for class org.apache.spark.mllib.classification.StreamingLogisticRegressionWithSGD
Construct a StreamingLogisticRegression object with default parameters: {stepSize: 0.1, numIterations: 50, miniBatchFraction: 1.0, regParam: 0.0}.
StreamingQuery - Interface in org.apache.spark.sql.streaming
A handle to a query that is executing continuously in the background as new data arrives.
StreamingQueryException - Exception in org.apache.spark.sql.streaming
Exception that stopped a StreamingQuery.
StreamingQueryListener - Class in org.apache.spark.sql.streaming
Interface for listening to events related to StreamingQueries.
StreamingQueryListener() - Constructor for class org.apache.spark.sql.streaming.StreamingQueryListener
 
StreamingQueryListener.Event - Interface in org.apache.spark.sql.streaming
Base type of StreamingQueryListener events
StreamingQueryListener.QueryProgressEvent - Class in org.apache.spark.sql.streaming
Event representing any progress updates in a query.
StreamingQueryListener.QueryStartedEvent - Class in org.apache.spark.sql.streaming
Event representing the start of a query param: id A unique query id that persists across restarts.
StreamingQueryListener.QueryTerminatedEvent - Class in org.apache.spark.sql.streaming
Event representing the termination of a query.
StreamingQueryManager - Class in org.apache.spark.sql.streaming
A class to manage all the StreamingQuery active in a SparkSession.
StreamingQueryProgress - Class in org.apache.spark.sql.streaming
Information about progress made in the execution of a StreamingQuery during a trigger.
StreamingQueryStatus - Class in org.apache.spark.sql.streaming
Reports information about the instantaneous status of a streaming query.
StreamingStatistics - Class in org.apache.spark.status.api.v1.streaming
 
StreamingTest - Class in org.apache.spark.mllib.stat.test
Performs online 2-sample significance testing for a stream of (Boolean, Double) pairs.
StreamingTest() - Constructor for class org.apache.spark.mllib.stat.test.StreamingTest
 
StreamingTestMethod - Interface in org.apache.spark.mllib.stat.test
Significance testing methods for StreamingTest.
StreamingWrite - Interface in org.apache.spark.sql.connector.write.streaming
An interface that defines how to write the data to a data source in streaming queries.
StreamInputInfo - Class in org.apache.spark.streaming.scheduler
:: DeveloperApi :: Track the information of input stream at specified batch time.
StreamInputInfo(int, long, Map<String, Object>) - Constructor for class org.apache.spark.streaming.scheduler.StreamInputInfo
 
streamName() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
 
streams() - Method in class org.apache.spark.sql.SparkSession
Returns a StreamingQueryManager that allows managing all the StreamingQuerys active on this.
streams() - Method in class org.apache.spark.sql.SQLContext
Returns a StreamingQueryManager that allows managing all the StreamingQueries active on this context.
StreamSinkProvider - Interface in org.apache.spark.sql.sources
::Experimental:: Implemented by objects that can produce a streaming Sink for a specific format or system.
StreamSourceProvider - Interface in org.apache.spark.sql.sources
::Experimental:: Implemented by objects that can produce a streaming Source for a specific format or system.
STRING() - Static method in class org.apache.spark.api.r.SerializationFormats
 
string() - Method in class org.apache.spark.sql.ColumnName
Creates a new StructField of type string.
STRING() - Static method in class org.apache.spark.sql.Encoders
An encoder for nullable string type.
StringArrayParam - Class in org.apache.spark.ml.param
:: DeveloperApi :: Specialized version of Param[Array[String]] for Java.
StringArrayParam(Params, String, String, Function1<String[], Object>) - Constructor for class org.apache.spark.ml.param.StringArrayParam
 
StringArrayParam(Params, String, String) - Constructor for class org.apache.spark.ml.param.StringArrayParam
 
StringContains - Class in org.apache.spark.sql.sources
A filter that evaluates to true iff the attribute evaluates to a string that contains the string value.
StringContains(String, String) - Constructor for class org.apache.spark.sql.sources.StringContains
 
StringEndsWith - Class in org.apache.spark.sql.sources
A filter that evaluates to true iff the attribute evaluates to a string that ends with value.
StringEndsWith(String, String) - Constructor for class org.apache.spark.sql.sources.StringEndsWith
 
stringHalfWidth(String) - Static method in class org.apache.spark.util.Utils
Return the number of half widths in a given string.
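The half-width count can be sketched in pure Python (an assumption about the counting rule based on the description above, not Spark's actual implementation): a full-width East Asian character counts as two half-widths, every other character as one.

```python
import unicodedata

def string_half_width(s: str) -> int:
    # Full-width ("F") and wide ("W") East Asian characters count as 2
    # half-widths; everything else counts as 1.
    return sum(
        2 if unicodedata.east_asian_width(c) in ("F", "W") else 1
        for c in s
    )

print(string_half_width("ab"))    # 2
print(string_half_width("中文"))  # 4
```

This is useful when aligning fixed-width console tables that mix ASCII and CJK text.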
StringIndexer - Class in org.apache.spark.ml.feature
A label indexer that maps string column(s) of labels to ML column(s) of label indices.
StringIndexer(String) - Constructor for class org.apache.spark.ml.feature.StringIndexer
 
StringIndexer() - Constructor for class org.apache.spark.ml.feature.StringIndexer
 
StringIndexerAggregator - Class in org.apache.spark.ml.feature
A SQL Aggregator used by StringIndexer to count labels in string columns during fitting.
StringIndexerAggregator(int) - Constructor for class org.apache.spark.ml.feature.StringIndexerAggregator
 
StringIndexerBase - Interface in org.apache.spark.ml.feature
Base trait for StringIndexer and StringIndexerModel.
StringIndexerModel - Class in org.apache.spark.ml.feature
Model fitted by StringIndexer.
StringIndexerModel(String, String[][]) - Constructor for class org.apache.spark.ml.feature.StringIndexerModel
 
StringIndexerModel(String, String[]) - Constructor for class org.apache.spark.ml.feature.StringIndexerModel
 
StringIndexerModel(String[]) - Constructor for class org.apache.spark.ml.feature.StringIndexerModel
 
StringIndexerModel(String[][]) - Constructor for class org.apache.spark.ml.feature.StringIndexerModel
 
stringIndexerOrderType() - Method in class org.apache.spark.ml.feature.RFormula
 
stringIndexerOrderType() - Method in interface org.apache.spark.ml.feature.RFormulaBase
Param for how to order categories of a string FEATURE column used by StringIndexer.
stringIndexerOrderType() - Method in class org.apache.spark.ml.feature.RFormulaModel
 
stringOrderType() - Method in class org.apache.spark.ml.feature.StringIndexer
 
stringOrderType() - Method in interface org.apache.spark.ml.feature.StringIndexerBase
Param for how to order labels of string column.
stringOrderType() - Method in class org.apache.spark.ml.feature.StringIndexerModel
 
StringRRDD<T> - Class in org.apache.spark.api.r
An RDD that stores R objects as Array[String].
StringRRDD(RDD<T>, byte[], String, byte[], Object[], ClassTag<T>) - Constructor for class org.apache.spark.api.r.StringRRDD
 
StringStartsWith - Class in org.apache.spark.sql.sources
A filter that evaluates to true iff the attribute evaluates to a string that starts with value.
StringStartsWith(String, String) - Constructor for class org.apache.spark.sql.sources.StringStartsWith
 
StringToColumn(StringContext) - Constructor for class org.apache.spark.sql.SQLImplicits.StringToColumn
 
stringToSeq(String, Function1<String, T>) - Static method in class org.apache.spark.internal.config.ConfigHelpers
 
stringToSeq(String) - Static method in class org.apache.spark.util.Utils
 
StringType - Static variable in class org.apache.spark.sql.types.DataTypes
Gets the StringType object.
StringType - Class in org.apache.spark.sql.types
The data type representing String values.
StringType() - Constructor for class org.apache.spark.sql.types.StringType
 
stronglyConnectedComponents(int) - Method in class org.apache.spark.graphx.GraphOps
Compute the strongly connected component (SCC) of each vertex and return a graph with the vertex value containing the lowest vertex id in the SCC containing that vertex.
StronglyConnectedComponents - Class in org.apache.spark.graphx.lib
Strongly connected components algorithm implementation.
StronglyConnectedComponents() - Constructor for class org.apache.spark.graphx.lib.StronglyConnectedComponents
 
struct(Seq<StructField>) - Method in class org.apache.spark.sql.ColumnName
Creates a new StructField of type struct.
struct(StructType) - Method in class org.apache.spark.sql.ColumnName
Creates a new StructField of type struct.
struct(Column...) - Static method in class org.apache.spark.sql.functions
Creates a new struct column.
struct(String, String...) - Static method in class org.apache.spark.sql.functions
Creates a new struct column that composes multiple input columns.
struct(Seq<Column>) - Static method in class org.apache.spark.sql.functions
Creates a new struct column.
struct(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
Creates a new struct column that composes multiple input columns.
StructField - Class in org.apache.spark.sql.types
A field inside a StructType.
StructField(String, DataType, boolean, Metadata) - Constructor for class org.apache.spark.sql.types.StructField
 
StructType - Class in org.apache.spark.sql.types
A StructType object can be constructed by StructType(fields: Seq[StructField]). For a StructType object, one or multiple StructFields can be extracted by names.
StructType(StructField[]) - Constructor for class org.apache.spark.sql.types.StructType
 
StructType() - Constructor for class org.apache.spark.sql.types.StructType
No-arg constructor for kryo.
stsCredentials(String, String) - Method in class org.apache.spark.streaming.kinesis.SparkAWSCredentials.Builder
Use STS to assume an IAM role for temporary session-based authentication.
stsCredentials(String, String, String) - Method in class org.apache.spark.streaming.kinesis.SparkAWSCredentials.Builder
Use STS to assume an IAM role for temporary session-based authentication.
StudentTTest - Class in org.apache.spark.mllib.stat.test
Performs Student's 2-sample t-test.
StudentTTest() - Constructor for class org.apache.spark.mllib.stat.test.StudentTTest
 
subgraph(Function1<EdgeTriplet<VD, ED>, Object>, Function2<Object, VD, Object>) - Method in class org.apache.spark.graphx.Graph
Restricts the graph to only the vertices and edges satisfying the predicates.
subgraph(Function1<EdgeTriplet<VD, ED>, Object>, Function2<Object, VD, Object>) - Method in class org.apache.spark.graphx.impl.GraphImpl
 
submissionTime() - Method in class org.apache.spark.scheduler.StageInfo
When this stage was submitted from the DAGScheduler to a TaskScheduler.
submissionTime() - Method in interface org.apache.spark.SparkStageInfo
 
submissionTime() - Method in class org.apache.spark.SparkStageInfoImpl
 
submissionTime() - Method in class org.apache.spark.status.api.v1.JobData
 
submissionTime() - Method in class org.apache.spark.status.api.v1.StageData
 
submissionTime() - Method in class org.apache.spark.status.LiveJob
 
submissionTime() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
 
submitJob(RDD<T>, Function1<Iterator<T>, U>, Seq<Object>, Function2<Object, U, BoxedUnit>, Function0<R>) - Method in interface org.apache.spark.JobSubmitter
Submit a job for execution and return a FutureAction holding the result.
submitJob(RDD<T>, Function1<Iterator<T>, U>, Seq<Object>, Function2<Object, U, BoxedUnit>, Function0<R>) - Method in class org.apache.spark.SparkContext
Submit a job for execution and return a FutureJob holding the result.
submitTasks(TaskSet) - Method in interface org.apache.spark.scheduler.TaskScheduler
 
subModels() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
 
subModels() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
 
subsamplingRate() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
 
subsamplingRate() - Method in class org.apache.spark.ml.classification.GBTClassifier
 
subsamplingRate() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
 
subsamplingRate() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
 
subsamplingRate() - Method in class org.apache.spark.ml.clustering.LDA
 
subsamplingRate() - Method in class org.apache.spark.ml.clustering.LDAModel
 
subsamplingRate() - Method in interface org.apache.spark.ml.clustering.LDAParams
For Online optimizer only: optimizer = "online".
subsamplingRate() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
 
subsamplingRate() - Method in class org.apache.spark.ml.regression.GBTRegressor
 
subsamplingRate() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
 
subsamplingRate() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
 
subsamplingRate() - Method in interface org.apache.spark.ml.tree.TreeEnsembleParams
Fraction of the training data used for learning each decision tree, in range (0, 1].
subsamplingRate() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
subsetAccuracy() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
Returns subset accuracy (for equal sets of labels).
substituteAppId(String, String) - Static method in class org.apache.spark.util.Utils
Replaces all the {{APP_ID}} occurrences with the App Id.
substituteAppNExecIds(String, String, String) - Static method in class org.apache.spark.util.Utils
Replaces all the {{EXECUTOR_ID}} occurrences with the Executor Id and {{APP_ID}} occurrences with the App Id.
substr(Column, Column) - Method in class org.apache.spark.sql.Column
An expression that returns a substring.
substr(int, int) - Method in class org.apache.spark.sql.Column
An expression that returns a substring.
substring(Column, int, int) - Static method in class org.apache.spark.sql.functions
Substring starts at pos and is of length len when str is String type, or returns the slice of the byte array that starts at pos (in bytes) and is of length len when str is Binary type.
substring_index(Column, String, int) - Static method in class org.apache.spark.sql.functions
Returns the substring from string str before count occurrences of the delimiter delim.
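The substring_index behavior described above can be sketched in pure Python (a hedged reimplementation of the documented semantics, not Spark's code): a positive count keeps everything left of the count-th delimiter, a negative count keeps everything right of the count-th delimiter counting from the end.

```python
def substring_index(s: str, delim: str, count: int) -> str:
    # Mimics the documented semantics of Spark SQL's substring_index.
    if count == 0:
        return ""
    parts = s.split(delim)
    if count > 0:
        # Everything before the `count`-th occurrence of `delim`.
        return delim.join(parts[:count])
    # Negative count: everything after the |count|-th occurrence from the right.
    return delim.join(parts[count:])

print(substring_index("a.b.c", ".", 2))   # a.b
print(substring_index("a.b.c", ".", -2))  # b.c
```

If count exceeds the number of delimiters present, the whole string is returned, which matches the slicing behavior of the sketch.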
subtract(JavaDoubleRDD) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return an RDD with the elements from this that are not in other.
subtract(JavaDoubleRDD, int) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return an RDD with the elements from this that are not in other.
subtract(JavaDoubleRDD, Partitioner) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return an RDD with the elements from this that are not in other.
subtract(JavaPairRDD<K, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
Return an RDD with the elements from this that are not in other.
subtract(JavaPairRDD<K, V>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
Return an RDD with the elements from this that are not in other.
subtract(JavaPairRDD<K, V>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
Return an RDD with the elements from this that are not in other.
subtract(JavaRDD<T>) - Method in class org.apache.spark.api.java.JavaRDD
Return an RDD with the elements from this that are not in other.
subtract(JavaRDD<T>, int) - Method in class org.apache.spark.api.java.JavaRDD
Return an RDD with the elements from this that are not in other.
subtract(JavaRDD<T>, Partitioner) - Method in class org.apache.spark.api.java.JavaRDD
Return an RDD with the elements from this that are not in other.
subtract(Term) - Static method in class org.apache.spark.ml.feature.Dot
 
subtract(Term) - Static method in class org.apache.spark.ml.feature.EmptyTerm
 
subtract(Term) - Method in interface org.apache.spark.ml.feature.Term
Fold by adding deletion terms to the left.
subtract(BlockMatrix) - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
Subtracts the given block matrix other from this block matrix: this - other.
subtract(RDD<T>) - Method in class org.apache.spark.rdd.RDD
Return an RDD with the elements from this that are not in other.
subtract(RDD<T>, int) - Method in class org.apache.spark.rdd.RDD
Return an RDD with the elements from this that are not in other.
subtract(RDD<T>, Partitioner, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
Return an RDD with the elements from this that are not in other.
subtract(long, long) - Static method in class org.apache.spark.streaming.util.RawTextHelper
 
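The RDD subtract family above all share one semantics, which can be sketched over plain Python lists (a hedged sketch of the documented behavior, ignoring partitioning; whether duplicates in `this` survive is an assumption based on RDD.subtract's set-difference-with-duplicates behavior):

```python
def rdd_subtract(this, other):
    # Elements of `this` that do not appear in `other`; duplicates in
    # `this` are kept as long as the value never occurs in `other`.
    other_set = set(other)
    return [x for x in this if x not in other_set]

print(rdd_subtract([1, 1, 2, 3], [2, 4]))  # [1, 1, 3]
```

The int and Partitioner overloads only control how the result is partitioned, not which elements it contains.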
subtractByKey(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
Return an RDD with the pairs from this whose keys are not in other.
subtractByKey(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
Return an RDD with the pairs from this whose keys are not in other.
subtractByKey(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
Return an RDD with the pairs from this whose keys are not in other.
subtractByKey(RDD<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Return an RDD with the pairs from this whose keys are not in other.
subtractByKey(RDD<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Return an RDD with the pairs from this whose keys are not in other.
subtractByKey(RDD<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Return an RDD with the pairs from this whose keys are not in other.
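Unlike subtract, subtractByKey compares only keys: values in other are irrelevant. A pure-Python sketch of the documented semantics (an illustration, not Spark's implementation):

```python
def subtract_by_key(this, other):
    # Keep (k, v) pairs from `this` whose key never appears in `other`;
    # the values carried by `other` play no role.
    other_keys = {k for k, _ in other}
    return [(k, v) for k, v in this if k not in other_keys]

print(subtract_by_key([("a", 1), ("b", 2)], [("b", 99)]))  # [('a', 1)]
```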
subtractMetrics(TaskMetrics, TaskMetrics) - Static method in class org.apache.spark.status.LiveEntityHelpers
Subtract m2 values from m1.
succeededTasks() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
 
succeededTasks() - Method in class org.apache.spark.status.LiveExecutorStageSummary
 
Success() - Static method in class org.apache.spark.ml.feature.RFormulaParser
 
success(T) - Static method in class org.apache.spark.ml.feature.RFormulaParser
 
Success - Class in org.apache.spark
:: DeveloperApi :: Task succeeded.
Success() - Constructor for class org.apache.spark.Success
 
successful() - Method in class org.apache.spark.scheduler.TaskInfo
 
sum() - Method in class org.apache.spark.api.java.JavaDoubleRDD
Add up the elements in this RDD.
Sum() - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
 
sum() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
Add up the elements in this RDD.
sum(MapFunction<T, Double>) - Static method in class org.apache.spark.sql.expressions.javalang.typed
Deprecated.
Sum aggregate function for floating point (double) type.
sum(Function1<IN, Object>) - Static method in class org.apache.spark.sql.expressions.scalalang.typed
Deprecated.
Sum aggregate function for floating point (double) type.
sum(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the sum of all values in the expression.
sum(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the sum of all values in the given column.
sum(String...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
Compute the sum for each numeric column for each group.
sum(Seq<String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
Compute the sum for each numeric column for each group.
sum() - Method in class org.apache.spark.util.DoubleAccumulator
Returns the sum of elements added to the accumulator.
sum() - Method in class org.apache.spark.util.LongAccumulator
Returns the sum of elements added to the accumulator.
sum() - Method in class org.apache.spark.util.StatCounter
 
sumApprox(long, Double) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Approximate operation to return the sum within a timeout.
sumApprox(long) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Approximate operation to return the sum within a timeout.
sumApprox(long, double) - Method in class org.apache.spark.rdd.DoubleRDDFunctions
Approximate operation to return the sum within a timeout.
sumDistinct(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the sum of distinct values in the expression.
sumDistinct(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the sum of distinct values in the expression.
sumLong(MapFunction<T, Long>) - Static method in class org.apache.spark.sql.expressions.javalang.typed
Deprecated.
Sum aggregate function for integral (long, i.e. 64 bit integer) type.
sumLong(Function1<IN, Object>) - Static method in class org.apache.spark.sql.expressions.scalalang.typed
Deprecated.
Sum aggregate function for integral (long, i.e. 64 bit integer) type.
Summarizer - Class in org.apache.spark.ml.stat
Tools for vectorized statistics on MLlib Vectors.
Summarizer() - Constructor for class org.apache.spark.ml.stat.Summarizer
 
summary() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
Gets summary of model on training set.
summary() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
Gets summary of model on training set.
summary() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
Gets summary of model on training set.
summary() - Method in class org.apache.spark.ml.clustering.KMeansModel
Gets summary of model on training set.
summary() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
Gets R-like summary of model on training set.
summary() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
Gets summary (e.g. residuals, mse, r-squared) of model on training set.
summary(Column, Column) - Method in class org.apache.spark.ml.stat.SummaryBuilder
Returns an aggregate object that contains the summary of the column with the requested metrics.
summary(Column) - Method in class org.apache.spark.ml.stat.SummaryBuilder
 
summary() - Method in interface org.apache.spark.ml.util.HasTrainingSummary
Gets summary of model on training set.
summary(String...) - Method in class org.apache.spark.sql.Dataset
Computes specified statistics for numeric and string columns.
summary(Seq<String>) - Method in class org.apache.spark.sql.Dataset
Computes specified statistics for numeric and string columns.
SummaryBuilder - Class in org.apache.spark.ml.stat
A builder object that provides summary statistics about a given column.
SummaryBuilder() - Constructor for class org.apache.spark.ml.stat.SummaryBuilder
 
supportColumnarReads(InputPartition) - Method in interface org.apache.spark.sql.connector.read.PartitionReaderFactory
Returns true if the given InputPartition should be read by Spark in a columnar way.
supportDataType(DataType) - Method in class org.apache.spark.sql.hive.orc.OrcFileFormat
 
supportedFeatureSubsetStrategies() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
Accessor for supported featureSubsetStrategy settings: auto, all, onethird, sqrt, log2
supportedFeatureSubsetStrategies() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
Accessor for supported featureSubsetStrategy settings: auto, all, onethird, sqrt, log2
supportedFeatureSubsetStrategies() - Static method in class org.apache.spark.mllib.tree.RandomForest
List of supported feature subset sampling strategies.
supportedImpurities() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
Accessor for supported impurities: entropy, gini
supportedImpurities() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
Accessor for supported impurity settings: entropy, gini
supportedImpurities() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
Accessor for supported impurities: variance
supportedImpurities() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
Accessor for supported impurity settings: variance
supportedLossTypes() - Static method in class org.apache.spark.ml.classification.GBTClassifier
Accessor for supported loss settings: logistic
supportedLossTypes() - Static method in class org.apache.spark.ml.regression.GBTRegressor
Accessor for supported loss settings: squared (L2), absolute (L1)
supportedOptimizers() - Method in class org.apache.spark.ml.clustering.LDA
 
supportedOptimizers() - Method in class org.apache.spark.ml.clustering.LDAModel
 
supportedOptimizers() - Method in interface org.apache.spark.ml.clustering.LDAParams
Supported values for Param optimizer.
supportedSelectorTypes() - Static method in class org.apache.spark.mllib.feature.ChiSqSelector
Set of selector types that ChiSqSelector supports.
SupportsDelete - Interface in org.apache.spark.sql.connector.catalog
A mix-in interface for Table delete support.
SupportsDynamicOverwrite - Interface in org.apache.spark.sql.connector.write
Write builder trait for tables that support dynamic partition overwrite.
SupportsNamespaces - Interface in org.apache.spark.sql.connector.catalog
Catalog methods for working with namespaces.
SupportsOverwrite - Interface in org.apache.spark.sql.connector.write
Write builder trait for tables that support overwrite by filter.
SupportsPushDownFilters - Interface in org.apache.spark.sql.connector.read
A mix-in interface for ScanBuilder.
SupportsPushDownRequiredColumns - Interface in org.apache.spark.sql.connector.read
A mix-in interface for ScanBuilder.
SupportsRead - Interface in org.apache.spark.sql.connector.catalog
A mix-in interface of Table, to indicate that it's readable.
SupportsReportPartitioning - Interface in org.apache.spark.sql.connector.read
A mix-in interface for Scan.
SupportsReportStatistics - Interface in org.apache.spark.sql.connector.read
A mix-in interface for Scan.
SupportsTruncate - Interface in org.apache.spark.sql.connector.write
Write builder trait for tables that support truncation.
SupportsWrite - Interface in org.apache.spark.sql.connector.catalog
A mix-in interface of Table, to indicate that it's writable.
surrogateDF() - Method in class org.apache.spark.ml.feature.ImputerModel
 
SVDPlusPlus - Class in org.apache.spark.graphx.lib
Implementation of SVD++ algorithm.
SVDPlusPlus() - Constructor for class org.apache.spark.graphx.lib.SVDPlusPlus
 
SVDPlusPlus.Conf - Class in org.apache.spark.graphx.lib
Configuration parameters for SVDPlusPlus.
SVMDataGenerator - Class in org.apache.spark.mllib.util
:: DeveloperApi :: Generate sample data used for SVM.
SVMDataGenerator() - Constructor for class org.apache.spark.mllib.util.SVMDataGenerator
 
SVMModel - Class in org.apache.spark.mllib.classification
Model for Support Vector Machines (SVMs).
SVMModel(Vector, double) - Constructor for class org.apache.spark.mllib.classification.SVMModel
 
SVMWithSGD - Class in org.apache.spark.mllib.classification
Train a Support Vector Machine (SVM) using Stochastic Gradient Descent.
SVMWithSGD() - Constructor for class org.apache.spark.mllib.classification.SVMWithSGD
Construct a SVM object with default parameters: {stepSize: 1.0, numIterations: 100, regParam: 0.01, miniBatchFraction: 1.0}.
symbolToColumn(Symbol) - Method in class org.apache.spark.sql.SQLImplicits
An implicit conversion that turns a Scala Symbol into a Column.
symlink(File, File) - Static method in class org.apache.spark.util.Utils
Creates a symlink.
symmetricEigs(Function1<DenseVector<Object>, DenseVector<Object>>, int, int, double, int) - Static method in class org.apache.spark.mllib.linalg.EigenValueDecomposition
Compute the leading k eigenvalues and eigenvectors on a symmetric square matrix using ARPACK.
syr(double, Vector, DenseMatrix) - Static method in class org.apache.spark.ml.linalg.BLAS
A := alpha * x * x^T^ + A
syr(double, Vector, DenseMatrix) - Static method in class org.apache.spark.mllib.linalg.BLAS
A := alpha * x * x^T^ + A
SYSTEM_DEFAULT() - Static method in class org.apache.spark.sql.types.DecimalType
 
systemProperties() - Method in class org.apache.spark.status.api.v1.ApplicationEnvironmentInfo
 

T

t() - Method in class org.apache.spark.SerializableWritable
 
Table - Class in org.apache.spark.sql.catalog
A table in Spark, as returned by the listTables method in Catalog.
Table(String, String, String, String, boolean) - Constructor for class org.apache.spark.sql.catalog.Table
 
Table - Interface in org.apache.spark.sql.connector.catalog
An interface representing a logical structured data set of a data source.
table(String) - Method in class org.apache.spark.sql.DataFrameReader
Returns the specified table as a DataFrame.
table() - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
 
table(String) - Method in class org.apache.spark.sql.SparkSession
Returns the specified table/view as a DataFrame.
table(String) - Method in class org.apache.spark.sql.SQLContext
Returns the specified table as a DataFrame.
table(int) - Method in interface org.apache.spark.ui.PagedTable
 
TABLE_CLASS_NOT_STRIPED() - Static method in class org.apache.spark.ui.UIUtils
 
TABLE_CLASS_STRIPED() - Static method in class org.apache.spark.ui.UIUtils
 
TABLE_CLASS_STRIPED_SORTABLE() - Static method in class org.apache.spark.ui.UIUtils
 
TableCapability - Enum in org.apache.spark.sql.connector.catalog
Capabilities that can be provided by a Table implementation.
TableCatalog - Interface in org.apache.spark.sql.connector.catalog
Catalog methods for working with Tables.
TableChange - Interface in org.apache.spark.sql.connector.catalog
TableChange subclasses represent requested changes to a table.
TableChange.AddColumn - Class in org.apache.spark.sql.connector.catalog
A TableChange to add a field.
TableChange.ColumnChange - Interface in org.apache.spark.sql.connector.catalog
 
TableChange.DeleteColumn - Class in org.apache.spark.sql.connector.catalog
A TableChange to delete a field.
TableChange.RemoveProperty - Class in org.apache.spark.sql.connector.catalog
A TableChange to remove a table property.
TableChange.RenameColumn - Class in org.apache.spark.sql.connector.catalog
A TableChange to rename a field.
TableChange.SetProperty - Class in org.apache.spark.sql.connector.catalog
A TableChange to set a table property.
TableChange.UpdateColumnComment - Class in org.apache.spark.sql.connector.catalog
A TableChange to update the comment of a field.
TableChange.UpdateColumnType - Class in org.apache.spark.sql.connector.catalog
A TableChange to update the type of a field.
tableCssClass() - Method in interface org.apache.spark.ui.PagedTable
 
tableDesc() - Method in interface org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectBase
 
tableDesc() - Method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
 
tableDesc() - Method in class org.apache.spark.sql.hive.execution.OptimizedCreateHiveTableAsSelectCommand
 
tableExists(String) - Method in class org.apache.spark.sql.catalog.Catalog
Check if the table or view with the specified name exists.
tableExists(String, String) - Method in class org.apache.spark.sql.catalog.Catalog
Check if the table or view with the specified name exists in the specified database.
tableExists(Identifier) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
 
tableExists(Identifier) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
Test whether a table exists using an identifier from the catalog.
tableExists(String, String) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Return whether a table/view with the specified name exists.
tableId() - Method in interface org.apache.spark.ui.PagedTable
 
tableIdentifier() - Method in interface org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectBase
 
tableNames() - 类 中的方法org.apache.spark.sql.SQLContext
Returns the names of tables in the current database as an array.
tableNames(String) - 类 中的方法org.apache.spark.sql.SQLContext
Returns the names of tables in the given database as an array.
tableProperty(String, String) - 接口 中的方法org.apache.spark.sql.CreateTableWriter
Add a table property.
tableProperty(String, String) - 类 中的方法org.apache.spark.sql.DataFrameWriterV2
 
TableProvider - org.apache.spark.sql.connector.catalog中的接口
The base interface for v2 data sources which don't have a real catalog.
TableReader - org.apache.spark.sql.hive中的接口
A trait for subclasses that handle table scans.
tables() - 类 中的方法org.apache.spark.sql.SQLContext
Returns a DataFrame containing names of existing tables in the current database.
tables(String) - 类 中的方法org.apache.spark.sql.SQLContext
Returns a DataFrame containing names of existing tables in the given database.
TableScan - org.apache.spark.sql.sources中的接口
A BaseRelation that can produce all of its tuples as an RDD of Row objects.
tableType() - 类 中的方法org.apache.spark.sql.catalog.Table
 
take(int) - 接口 中的方法org.apache.spark.api.java.JavaRDDLike
Take the first num elements of the RDD.
take(int) - 类 中的方法org.apache.spark.rdd.RDD
Take the first num elements of the RDD.
take(int) - 类 中的方法org.apache.spark.sql.Dataset
Returns the first n rows in the Dataset.
takeAsList(int) - 类 中的方法org.apache.spark.sql.Dataset
Returns the first n rows in the Dataset as a list.
takeAsync(int) - 接口 中的方法org.apache.spark.api.java.JavaRDDLike
The asynchronous version of the take action, which returns a future for retrieving the first num elements of this RDD.
takeAsync(int) - 类 中的方法org.apache.spark.rdd.AsyncRDDActions
Returns a future for retrieving the first num elements of the RDD.
takeOrdered(int, Comparator<T>) - 接口 中的方法org.apache.spark.api.java.JavaRDDLike
Returns the first k (smallest) elements from this RDD as defined by the specified Comparator[T] and maintains the order.
takeOrdered(int) - 接口 中的方法org.apache.spark.api.java.JavaRDDLike
Returns the first k (smallest) elements from this RDD using the natural ordering for T while maintain the order.
takeOrdered(int, Ordering<T>) - 类 中的方法org.apache.spark.rdd.RDD
Returns the first k (smallest) elements from this RDD as defined by the specified implicit Ordering[T] and maintains the ordering.
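A sketch of the two takeOrdered forms, assuming an active SparkContext `sc`:

```scala
// Assumes an active SparkContext `sc`; values are illustrative.
val rdd = sc.parallelize(Seq(5, 1, 4, 2, 3))

// Smallest three elements under the natural ordering.
val smallest = rdd.takeOrdered(3)                        // Array(1, 2, 3)

// Largest three, by supplying a reversed implicit Ordering.
val largest = rdd.takeOrdered(3)(Ordering[Int].reverse)  // Array(5, 4, 3)
```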
takeSample(boolean, int) - 接口 中的方法org.apache.spark.api.java.JavaRDDLike
 
takeSample(boolean, int, long) - 接口 中的方法org.apache.spark.api.java.JavaRDDLike
 
takeSample(boolean, int, long) - 类 中的方法org.apache.spark.rdd.RDD
Returns a fixed-size sampled subset of this RDD in an array.
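A sketch of takeSample, assuming an active SparkContext `sc`:

```scala
// Assumes an active SparkContext `sc`.
val rdd = sc.parallelize(1 to 100)

// Without replacement: up to 10 distinct elements.
val withoutReplacement = rdd.takeSample(withReplacement = false, num = 10)

// With replacement and a fixed seed for reproducibility.
val withReplacement = rdd.takeSample(withReplacement = true, num = 10, seed = 42L)
```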
tallSkinnyQR(boolean) - 类 中的方法org.apache.spark.mllib.linalg.distributed.RowMatrix
Compute QR decomposition for RowMatrix.
tan(Column) - 类 中的静态方法org.apache.spark.sql.functions
 
tan(String) - 类 中的静态方法org.apache.spark.sql.functions
 
tanh(Column) - 类 中的静态方法org.apache.spark.sql.functions
 
tanh(String) - 类 中的静态方法org.apache.spark.sql.functions
 
targetStorageLevel() - 类 中的方法org.apache.spark.graphx.impl.EdgeRDDImpl
 
targetStorageLevel() - 类 中的方法org.apache.spark.graphx.impl.VertexRDDImpl
 
task() - 类 中的方法org.apache.spark.CleanupTaskWeakReference
 
TASK_DESERIALIZATION_TIME() - 类 中的静态方法org.apache.spark.ui.jobs.TaskDetailsClassNames
 
TASK_DESERIALIZATION_TIME() - 类 中的静态方法org.apache.spark.ui.ToolTips
 
TASK_INDEX() - 类 中的静态方法org.apache.spark.status.TaskIndexNames
 
TASK_TIME() - 类 中的静态方法org.apache.spark.ui.ToolTips
 
taskAttemptId() - 类 中的方法org.apache.spark.BarrierTaskContext
 
taskAttemptId() - 类 中的方法org.apache.spark.TaskContext
An ID that is unique to this task attempt (within the same SparkContext, no two task attempts will share the same attempt ID).
TaskCommitDenied - org.apache.spark中的类
:: DeveloperApi :: Task requested the driver to commit, but was denied.
TaskCommitDenied(int, int, int) - 类 的构造器org.apache.spark.TaskCommitDenied
 
TaskCommitMessage(Object) - 类 的构造器org.apache.spark.internal.io.FileCommitProtocol.TaskCommitMessage
 
TaskCompletionListener - org.apache.spark.util中的接口
:: DeveloperApi :: Listener providing a callback function to invoke when a task's execution completes.
TaskContext - org.apache.spark中的类
Contextual information about a task which can be read or mutated during execution.
TaskContext() - 类 的构造器org.apache.spark.TaskContext
 
TaskData - org.apache.spark.status.api.v1中的类
 
TaskDetailsClassNames - org.apache.spark.ui.jobs中的类
Names of the CSS classes corresponding to each type of task detail.
TaskDetailsClassNames() - 类 的构造器org.apache.spark.ui.jobs.TaskDetailsClassNames
 
taskEndFromJson(JsonAST.JValue) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
TaskEndReason - org.apache.spark中的接口
:: DeveloperApi :: Various possible reasons why a task ended.
taskEndReasonFromJson(JsonAST.JValue) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
taskEndReasonToJson(TaskEndReason) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
taskEndToJson(SparkListenerTaskEnd) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
taskExecutorMetrics() - 类 中的方法org.apache.spark.scheduler.SparkListenerTaskEnd
 
TaskFailedReason - org.apache.spark中的接口
:: DeveloperApi :: Various possible reasons why a task failed.
TaskFailureListener - org.apache.spark.util中的接口
:: DeveloperApi :: Listener providing a callback function to invoke when a task's execution encounters an error.
taskFailures() - 类 中的方法org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
 
taskFailures() - 类 中的方法org.apache.spark.scheduler.SparkListenerExecutorBlacklistedForStage
 
taskGettingResultFromJson(JsonAST.JValue) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
taskGettingResultToJson(SparkListenerTaskGettingResult) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
taskId() - 类 中的方法org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask
 
taskId() - 类 中的方法org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
 
taskId() - 类 中的方法org.apache.spark.scheduler.local.KillTask
 
taskId() - 类 中的方法org.apache.spark.scheduler.local.StatusUpdate
 
taskId() - 类 中的方法org.apache.spark.scheduler.TaskInfo
 
taskId() - 类 中的方法org.apache.spark.status.api.v1.TaskData
 
taskId() - 类 中的方法org.apache.spark.storage.TaskResultBlockId
 
TaskIndexNames - org.apache.spark.status中的类
Tasks have a lot of indices that are used in a few different places.
TaskIndexNames() - 类 的构造器org.apache.spark.status.TaskIndexNames
 
taskInfo() - 类 中的方法org.apache.spark.scheduler.SparkListenerTaskEnd
 
taskInfo() - 类 中的方法org.apache.spark.scheduler.SparkListenerTaskGettingResult
 
taskInfo() - 类 中的方法org.apache.spark.scheduler.SparkListenerTaskStart
 
TaskInfo - org.apache.spark.scheduler中的类
:: DeveloperApi :: Information about a running task attempt inside a TaskSet.
TaskInfo(long, int, int, long, String, String, Enumeration.Value, boolean) - 类 的构造器org.apache.spark.scheduler.TaskInfo
 
taskInfoFromJson(JsonAST.JValue) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
taskInfoToJson(TaskInfo) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
TaskKilled - org.apache.spark中的类
:: DeveloperApi :: Task was killed intentionally and needs to be rescheduled.
TaskKilled(String, Seq<AccumulableInfo>, Seq<AccumulatorV2<?, ?>>, Seq<Object>) - 类 的构造器org.apache.spark.TaskKilled
 
TaskKilledException - org.apache.spark中的异常错误
:: DeveloperApi :: Exception thrown when a task is explicitly killed (i.e., task failure is expected).
TaskKilledException(String) - 异常错误 的构造器org.apache.spark.TaskKilledException
 
TaskKilledException() - 异常错误 的构造器org.apache.spark.TaskKilledException
 
taskLocality() - 类 中的方法org.apache.spark.scheduler.TaskInfo
 
TaskLocality - org.apache.spark.scheduler中的类
 
TaskLocality() - 类 的构造器org.apache.spark.scheduler.TaskLocality
 
taskLocality() - 类 中的方法org.apache.spark.status.api.v1.TaskData
 
TaskLocation - org.apache.spark.scheduler中的接口
A location where a task should run.
TaskMetricDistributions - org.apache.spark.status.api.v1中的类
 
taskMetrics() - 类 中的方法org.apache.spark.BarrierTaskContext
 
taskMetrics() - 类 中的方法org.apache.spark.scheduler.SparkListenerTaskEnd
 
taskMetrics() - 类 中的方法org.apache.spark.scheduler.StageInfo
 
taskMetrics() - 类 中的方法org.apache.spark.status.api.v1.TaskData
 
TaskMetrics - org.apache.spark.status.api.v1中的类
 
taskMetrics() - 类 中的方法org.apache.spark.TaskContext
 
taskMetricsFromJson(JsonAST.JValue) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
taskMetricsToJson(TaskMetrics) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
TaskResult<T> - org.apache.spark.scheduler中的接口
 
TASKRESULT() - 类 中的静态方法org.apache.spark.storage.BlockId
 
TaskResultBlockId - org.apache.spark.storage中的类
 
TaskResultBlockId(long) - 类 的构造器org.apache.spark.storage.TaskResultBlockId
 
TaskResultLost - org.apache.spark中的类
:: DeveloperApi :: The task finished successfully, but the result was lost from the executor's block manager before it was fetched.
TaskResultLost() - 类 的构造器org.apache.spark.TaskResultLost
 
tasks() - 类 中的方法org.apache.spark.status.api.v1.StageData
 
TaskScheduler - org.apache.spark.scheduler中的接口
Low-level task scheduler interface, currently implemented exclusively by TaskSchedulerImpl.
TaskSchedulerIsSet - org.apache.spark中的类
An event that SparkContext uses to notify HeartbeatReceiver that SparkContext.taskScheduler is created.
TaskSchedulerIsSet() - 类 的构造器org.apache.spark.TaskSchedulerIsSet
 
TaskSorting - org.apache.spark.status.api.v1中的枚举
 
taskStartFromJson(JsonAST.JValue) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
taskStartToJson(SparkListenerTaskStart) - 类 中的静态方法org.apache.spark.util.JsonProtocol
 
TaskState - org.apache.spark中的类
 
TaskState() - 类 的构造器org.apache.spark.TaskState
 
taskSucceeded(int, Object) - 接口 中的方法org.apache.spark.scheduler.JobListener
 
taskTime() - 类 中的方法org.apache.spark.status.api.v1.ExecutorStageSummary
 
taskTime() - 类 中的方法org.apache.spark.status.LiveExecutorStageSummary
 
taskType() - 类 中的方法org.apache.spark.scheduler.SparkListenerTaskEnd
 
TEMP_DIR_SHUTDOWN_PRIORITY() - 类 中的静态方法org.apache.spark.util.ShutdownHookManager
The shutdown priority of temp directory must be lower than the SparkContext shutdown priority.
TEMP_LOCAL() - 类 中的静态方法org.apache.spark.storage.BlockId
 
TEMP_SHUFFLE() - 类 中的静态方法org.apache.spark.storage.BlockId
 
tempFileWith(File) - 类 中的静态方法org.apache.spark.util.Utils
Returns the path of a temporary file in the same directory as the given path.
TeradataDialect - org.apache.spark.sql.jdbc中的类
 
TeradataDialect() - 类 的构造器org.apache.spark.sql.jdbc.TeradataDialect
 
Term - org.apache.spark.ml.feature中的接口
R formula terms.
terminateProcess(Process, long) - 类 中的静态方法org.apache.spark.util.Utils
Terminates a process, waiting at most the specified duration for it to exit.
test(Dataset<Row>, String, String) - 类 中的静态方法org.apache.spark.ml.stat.ChiSquareTest
Conduct Pearson's independence test for every feature against the label.
test(Dataset<?>, String, String, double...) - 类 中的静态方法org.apache.spark.ml.stat.KolmogorovSmirnovTest
Convenience function to conduct a one-sample, two-sided Kolmogorov-Smirnov test for probability distribution equality.
test(Dataset<?>, String, Function1<Object, Object>) - 类 中的静态方法org.apache.spark.ml.stat.KolmogorovSmirnovTest
 
test(Dataset<?>, String, Function<Double, Double>) - 类 中的静态方法org.apache.spark.ml.stat.KolmogorovSmirnovTest
 
test(Dataset<?>, String, String, Seq<Object>) - 类 中的静态方法org.apache.spark.ml.stat.KolmogorovSmirnovTest
 
TEST() - 类 中的静态方法org.apache.spark.storage.BlockId
 
TEST_ACCUM() - 类 中的静态方法org.apache.spark.InternalAccumulator
 
TEST_MEMORY() - 类 中的静态方法org.apache.spark.internal.config.Tests
 
TEST_N_CORES_EXECUTOR() - 类 中的静态方法org.apache.spark.internal.config.Tests
 
TEST_N_EXECUTORS_HOST() - 类 中的静态方法org.apache.spark.internal.config.Tests
 
TEST_N_HOSTS() - 类 中的静态方法org.apache.spark.internal.config.Tests
 
TEST_NO_STAGE_RETRY() - 类 中的静态方法org.apache.spark.internal.config.Tests
 
TEST_RESERVED_MEMORY() - 类 中的静态方法org.apache.spark.internal.config.Tests
 
TEST_SCHEDULE_INTERVAL() - 类 中的静态方法org.apache.spark.internal.config.Tests
 
TEST_USE_COMPRESSED_OOPS_KEY() - 类 中的静态方法org.apache.spark.internal.config.Tests
 
testCommandAvailable(String) - 类 中的静态方法org.apache.spark.TestUtils
Test if a command is available.
testOneSample(RDD<Object>, String, double...) - 类 中的静态方法org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest
A convenience function for running the KS test on one set of sample data against a named distribution.
testOneSample(RDD<Object>, Function1<Object, Object>) - 类 中的静态方法org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest
 
testOneSample(RDD<Object>, RealDistribution) - 类 中的静态方法org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest
 
testOneSample(RDD<Object>, String, Seq<Object>) - 类 中的静态方法org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest
 
TestResult<DF> - org.apache.spark.mllib.stat.test中的接口
Trait for hypothesis test results.
Tests - org.apache.spark.internal.config中的类
 
Tests() - 类 的构造器org.apache.spark.internal.config.Tests
 
TestUtils - org.apache.spark中的类
Utilities for tests.
TestUtils() - 类 的构造器org.apache.spark.TestUtils
 
text(String...) - 类 中的方法org.apache.spark.sql.DataFrameReader
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any.
text(String) - 类 中的方法org.apache.spark.sql.DataFrameReader
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any.
text(Seq<String>) - 类 中的方法org.apache.spark.sql.DataFrameReader
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any.
text(String) - 类 中的方法org.apache.spark.sql.DataFrameWriter
Saves the content of the DataFrame in a text file at the specified path.
text(String) - 类 中的方法org.apache.spark.sql.streaming.DataStreamReader
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any.
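A sketch of DataFrameReader.text, assuming an active SparkSession `spark` and a hypothetical path:

```scala
// Assumes an active SparkSession `spark`; the path is illustrative.
val df = spark.read.text("/path/to/logs")  // single string column named "value"

df.printSchema()
// root
//  |-- value: string (nullable = true)
```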
textFile(String) - 类 中的方法org.apache.spark.api.java.JavaSparkContext
Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.
textFile(String, int) - 类 中的方法org.apache.spark.api.java.JavaSparkContext
Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.
textFile(String, int) - 类 中的方法org.apache.spark.SparkContext
Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.
textFile(String...) - 类 中的方法org.apache.spark.sql.DataFrameReader
Loads text files and returns a Dataset of String.
textFile(String) - 类 中的方法org.apache.spark.sql.DataFrameReader
Loads text files and returns a Dataset of String.
textFile(Seq<String>) - 类 中的方法org.apache.spark.sql.DataFrameReader
Loads text files and returns a Dataset of String.
textFile(String) - 类 中的方法org.apache.spark.sql.streaming.DataStreamReader
Loads text file(s) and returns a Dataset of String.
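Unlike read.text, which yields a DataFrame with a "value" column, DataFrameReader.textFile yields a Dataset[String]. A sketch, assuming an active SparkSession `spark` and a hypothetical path:

```scala
// Assumes an active SparkSession `spark`; the path is illustrative.
val lines: org.apache.spark.sql.Dataset[String] =
  spark.read.textFile("/path/to/data.txt")

// Dataset[String] supports typed transformations directly on the lines.
val nonEmpty = lines.filter(_.nonEmpty)
```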
textFileStream(String) - 类 中的方法org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them as text files (using key as LongWritable, value as Text and input format as TextInputFormat).
textFileStream(String) - 类 中的方法org.apache.spark.streaming.StreamingContext
Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them as text files (using key as LongWritable, value as Text and input format as TextInputFormat).
textResponderToServlet(Function1<HttpServletRequest, String>) - 类 中的静态方法org.apache.spark.ui.JettyUtils
 
thenComparing(Comparator<? super T>) - 类 中的静态方法org.apache.spark.sql.types.ByteExactNumeric
 
thenComparing(Function<? super T, ? extends U>, Comparator<? super U>) - 类 中的静态方法org.apache.spark.sql.types.ByteExactNumeric
 
thenComparing(Function<? super T, ? extends U>) - 类 中的静态方法org.apache.spark.sql.types.ByteExactNumeric
 
thenComparing(Comparator<? super T>) - 类 中的静态方法org.apache.spark.sql.types.DecimalExactNumeric
 
thenComparing(Function<? super T, ? extends U>, Comparator<? super U>) - 类 中的静态方法org.apache.spark.sql.types.DecimalExactNumeric
 
thenComparing(Function<? super T, ? extends U>) - 类 中的静态方法org.apache.spark.sql.types.DecimalExactNumeric
 
thenComparing(Comparator<? super T>) - 类 中的静态方法org.apache.spark.sql.types.DoubleExactNumeric
 
thenComparing(Function<? super T, ? extends U>, Comparator<? super U>) - 类 中的静态方法org.apache.spark.sql.types.DoubleExactNumeric
 
thenComparing(Function<? super T, ? extends U>) - 类 中的静态方法org.apache.spark.sql.types.DoubleExactNumeric
 
thenComparing(Comparator<? super T>) - 类 中的静态方法org.apache.spark.sql.types.FloatExactNumeric
 
thenComparing(Function<? super T, ? extends U>, Comparator<? super U>) - 类 中的静态方法org.apache.spark.sql.types.FloatExactNumeric
 
thenComparing(Function<? super T, ? extends U>) - 类 中的静态方法org.apache.spark.sql.types.FloatExactNumeric
 
thenComparing(Comparator<? super T>) - 类 中的静态方法org.apache.spark.sql.types.IntegerExactNumeric
 
thenComparing(Function<? super T, ? extends U>, Comparator<? super U>) - 类 中的静态方法org.apache.spark.sql.types.IntegerExactNumeric
 
thenComparing(Function<? super T, ? extends U>) - 类 中的静态方法org.apache.spark.sql.types.IntegerExactNumeric
 
thenComparing(Comparator<? super T>) - 类 中的静态方法org.apache.spark.sql.types.LongExactNumeric
 
thenComparing(Function<? super T, ? extends U>, Comparator<? super U>) - 类 中的静态方法org.apache.spark.sql.types.LongExactNumeric
 
thenComparing(Function<? super T, ? extends U>) - 类 中的静态方法org.apache.spark.sql.types.LongExactNumeric
 
thenComparing(Comparator<? super T>) - 类 中的静态方法org.apache.spark.sql.types.ShortExactNumeric
 
thenComparing(Function<? super T, ? extends U>, Comparator<? super U>) - 类 中的静态方法org.apache.spark.sql.types.ShortExactNumeric
 
thenComparing(Function<? super T, ? extends U>) - 类 中的静态方法org.apache.spark.sql.types.ShortExactNumeric
 
thenComparingDouble(ToDoubleFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.ByteExactNumeric
 
thenComparingDouble(ToDoubleFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.DecimalExactNumeric
 
thenComparingDouble(ToDoubleFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.DoubleExactNumeric
 
thenComparingDouble(ToDoubleFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.FloatExactNumeric
 
thenComparingDouble(ToDoubleFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.IntegerExactNumeric
 
thenComparingDouble(ToDoubleFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.LongExactNumeric
 
thenComparingDouble(ToDoubleFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.ShortExactNumeric
 
thenComparingInt(ToIntFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.ByteExactNumeric
 
thenComparingInt(ToIntFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.DecimalExactNumeric
 
thenComparingInt(ToIntFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.DoubleExactNumeric
 
thenComparingInt(ToIntFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.FloatExactNumeric
 
thenComparingInt(ToIntFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.IntegerExactNumeric
 
thenComparingInt(ToIntFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.LongExactNumeric
 
thenComparingInt(ToIntFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.ShortExactNumeric
 
thenComparingLong(ToLongFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.ByteExactNumeric
 
thenComparingLong(ToLongFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.DecimalExactNumeric
 
thenComparingLong(ToLongFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.DoubleExactNumeric
 
thenComparingLong(ToLongFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.FloatExactNumeric
 
thenComparingLong(ToLongFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.IntegerExactNumeric
 
thenComparingLong(ToLongFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.LongExactNumeric
 
thenComparingLong(ToLongFunction<? super T>) - 类 中的静态方法org.apache.spark.sql.types.ShortExactNumeric
 
theta() - 类 中的方法org.apache.spark.ml.classification.NaiveBayesModel
 
theta() - 类 中的方法org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data
 
theta() - 类 中的方法org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
 
theta() - 类 中的方法org.apache.spark.mllib.classification.NaiveBayesModel
 
thisClassName() - 类 中的方法org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$
Hard-coded class name string, in case it changes in the future.
thisClassName() - 类 中的方法org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$
Hard-coded class name string, in case it changes in the future.
thisClassName() - 类 中的方法org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
 
thisFormatVersion() - 类 中的方法org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$
 
thisFormatVersion() - 类 中的方法org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$
 
thisFormatVersion() - 类 中的方法org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$
 
thisFormatVersion() - 类 中的方法org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$
 
thisFormatVersion() - 类 中的方法org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
 
threadCount() - 接口 中的方法org.apache.spark.rpc.IsolatedRpcEndpoint
How many threads to use for delivering messages.
threadId() - 类 中的方法org.apache.spark.status.api.v1.ThreadStackTrace
 
threadName() - 类 中的方法org.apache.spark.status.api.v1.ThreadStackTrace
 
ThreadSafeRpcEndpoint - org.apache.spark.rpc中的接口
A trait requiring that RpcEnv deliver messages to it in a thread-safe manner.
ThreadStackTrace - org.apache.spark.status.api.v1中的类
 
ThreadStackTrace(long, String, Thread.State, StackTrace, Option<Object>, String, Seq<String>) - 类 的构造器org.apache.spark.status.api.v1.ThreadStackTrace
 
threadState() - 类 中的方法org.apache.spark.status.api.v1.ThreadStackTrace
 
ThreadUtils - org.apache.spark.util中的类
 
ThreadUtils() - 类 的构造器org.apache.spark.util.ThreadUtils
 
threshold() - 类 中的方法org.apache.spark.ml.classification.LinearSVC
 
threshold() - 类 中的方法org.apache.spark.ml.classification.LinearSVCModel
 
threshold() - 接口 中的方法org.apache.spark.ml.classification.LinearSVCParams
Param for threshold in binary classification prediction.
threshold() - 类 中的方法org.apache.spark.ml.classification.LogisticRegression
 
threshold() - 类 中的方法org.apache.spark.ml.classification.LogisticRegressionModel
 
threshold() - 类 中的方法org.apache.spark.ml.feature.Binarizer
Param for threshold used to binarize continuous features.
threshold() - 接口 中的方法org.apache.spark.ml.param.shared.HasThreshold
Param for threshold in binary classification prediction, in range [0, 1].
threshold() - 类 中的方法org.apache.spark.ml.tree.ContinuousSplit
 
threshold() - 类 中的方法org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data
 
threshold() - 类 中的方法org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
 
threshold() - 类 中的方法org.apache.spark.mllib.tree.model.Split
 
thresholds() - 类 中的方法org.apache.spark.ml.classification.ProbabilisticClassificationModel
 
thresholds() - 类 中的方法org.apache.spark.ml.classification.ProbabilisticClassifier
 
thresholds() - 类 中的方法org.apache.spark.ml.feature.Binarizer
Array of thresholds used to binarize continuous features.
thresholds() - 接口 中的方法org.apache.spark.ml.param.shared.HasThresholds
Param for Thresholds in multi-class classification to adjust the probability of predicting each class.
thresholds() - 类 中的方法org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Returns thresholds in descending order.
throughOrigin() - 类 中的方法org.apache.spark.ml.evaluation.RegressionEvaluator
Param for whether the regression is through the origin.
throwBalls(int, RDD<?>, double, org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations) - 类 中的方法org.apache.spark.rdd.DefaultPartitionCoalescer
 
time() - 类 中的方法org.apache.spark.scheduler.SparkListenerApplicationEnd
 
time() - 类 中的方法org.apache.spark.scheduler.SparkListenerApplicationStart
 
time() - 类 中的方法org.apache.spark.scheduler.SparkListenerBlockManagerAdded
 
time() - 类 中的方法org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
 
time() - 类 中的方法org.apache.spark.scheduler.SparkListenerExecutorAdded
 
time() - 类 中的方法org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
 
time() - 类 中的方法org.apache.spark.scheduler.SparkListenerExecutorBlacklistedForStage
 
time() - 类 中的方法org.apache.spark.scheduler.SparkListenerExecutorRemoved
 
time() - 类 中的方法org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted
 
time() - 类 中的方法org.apache.spark.scheduler.SparkListenerJobEnd
 
time() - 类 中的方法org.apache.spark.scheduler.SparkListenerJobStart
 
time() - 类 中的方法org.apache.spark.scheduler.SparkListenerNodeBlacklisted
 
time() - 类 中的方法org.apache.spark.scheduler.SparkListenerNodeBlacklistedForStage
 
time() - 类 中的方法org.apache.spark.scheduler.SparkListenerNodeUnblacklisted
 
time(Function0<T>) - 类 中的方法org.apache.spark.sql.SparkSession
Executes some code block and prints to stdout the time taken to execute the block.
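A sketch of SparkSession.time, assuming an active SparkSession `spark`. The method runs the block, prints the elapsed time to stdout, and returns the block's result:

```scala
// Assumes an active SparkSession `spark`.
val rowCount: Long = spark.time {
  spark.range(0, 1000000).count()
}
// Prints a line like "Time taken: ... ms" and returns the count.
```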
time() - 异常错误 中的方法org.apache.spark.sql.streaming.StreamingQueryException
Time when the exception occurred
time() - 类 中的方法org.apache.spark.streaming.scheduler.StreamingListenerStreamingStarted
 
Time - org.apache.spark.streaming中的类
This is a simple class that represents an absolute instant of time.
Time(long) - 类 的构造器org.apache.spark.streaming.Time
 
timeFromString(String, TimeUnit) - 类 中的静态方法org.apache.spark.internal.config.ConfigHelpers
 
timeIt(int, Function0<BoxedUnit>, Option<Function0<BoxedUnit>>) - 类 中的静态方法org.apache.spark.util.Utils
Timing method based on iterations that permit JVM JIT optimization.
timeout(Duration) - 类 中的方法org.apache.spark.streaming.StateSpec
Set the duration after which the state of an idle key will be removed.
TIMER() - 类 中的静态方法org.apache.spark.metrics.sink.StatsdMetricType
 
times(byte, byte) - 类 中的静态方法org.apache.spark.sql.types.ByteExactNumeric
 
times(Decimal, Decimal) - 接口 中的方法org.apache.spark.sql.types.Decimal.DecimalIsConflicted
 
times(Decimal, Decimal) - 类 中的静态方法org.apache.spark.sql.types.DecimalExactNumeric
 
times(double, double) - 类 中的静态方法org.apache.spark.sql.types.DoubleExactNumeric
 
times(float, float) - 类 中的静态方法org.apache.spark.sql.types.FloatExactNumeric
 
times(int, int) - 类 中的静态方法org.apache.spark.sql.types.IntegerExactNumeric
 
times(long, long) - 类 中的静态方法org.apache.spark.sql.types.LongExactNumeric
 
times(short, short) - 类 中的静态方法org.apache.spark.sql.types.ShortExactNumeric
 
times(int) - 类 中的方法org.apache.spark.streaming.Duration
 
times(int, Function0<BoxedUnit>) - 类 中的静态方法org.apache.spark.util.Utils
Executes a task repeatedly for its side effects.
timestamp() - 类 中的方法org.apache.spark.sql.ColumnName
Creates a new StructField of type timestamp.
TIMESTAMP() - 类 中的静态方法org.apache.spark.sql.Encoders
An encoder for nullable timestamp type.
timestamp() - 类 中的方法org.apache.spark.sql.streaming.StreamingQueryProgress
 
TimestampType - 类 中的静态变量org.apache.spark.sql.types.DataTypes
Gets the TimestampType object.
TimestampType - org.apache.spark.sql.types中的类
The timestamp type represents a time instant in microsecond precision.
TimestampType() - 类 的构造器org.apache.spark.sql.types.TimestampType
 
timeStringAsMs(String) - 类 中的静态方法org.apache.spark.util.Utils
Convert a time parameter such as (50s, 100ms, or 250us) to milliseconds for internal use.
timeStringAsSeconds(String) - 类 中的静态方法org.apache.spark.util.Utils
Convert a time parameter such as (50s, 100ms, or 250us) to seconds for internal use.
timeTakenMs(Function0<T>) - 类 中的静态方法org.apache.spark.util.Utils
Records the duration of running `body`.
timeToString(long, TimeUnit) - 类 中的静态方法org.apache.spark.internal.config.ConfigHelpers
 
TimeTrackingOutputStream - org.apache.spark.storage中的类
Intercepts write calls and tracks total time spent writing in order to update shuffle write metrics.
TimeTrackingOutputStream(ShuffleWriteMetricsReporter, OutputStream) - 类 的构造器org.apache.spark.storage.TimeTrackingOutputStream
 
timeUnit() - 类 中的方法org.apache.spark.mllib.clustering.StreamingKMeans
 
TIMING_DATA() - 类 中的静态方法org.apache.spark.api.r.SpecialLengths
 
to(Time, Duration) - 类 中的方法org.apache.spark.streaming.Time
 
to_csv(Column, Map<String, String>) - 类 中的静态方法org.apache.spark.sql.functions
(Java-specific) Converts a column containing a StructType into a CSV string with the specified schema.
to_csv(Column) - 类 中的静态方法org.apache.spark.sql.functions
Converts a column containing a StructType into a CSV string with the specified schema.
to_date(Column) - 类 中的静态方法org.apache.spark.sql.functions
Converts the column into DateType by casting rules to DateType.
to_date(Column, String) - 类 中的静态方法org.apache.spark.sql.functions
Converts the column into a DateType with a specified format. See DateTimeFormatter for valid date and time format patterns.
to_json(Column, Map<String, String>) - 类 中的静态方法org.apache.spark.sql.functions
(Scala-specific) Converts a column containing a StructType, ArrayType or a MapType into a JSON string with the specified schema.
to_json(Column, Map<String, String>) - 类 中的静态方法org.apache.spark.sql.functions
(Java-specific) Converts a column containing a StructType, ArrayType or a MapType into a JSON string with the specified schema.
to_json(Column) - 类 中的静态方法org.apache.spark.sql.functions
Converts a column containing a StructType, ArrayType or a MapType into a JSON string with the specified schema.
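A sketch of to_json (and the related to_csv above), assuming an active SparkSession `spark`; column names are illustrative:

```scala
// Assumes an active SparkSession `spark`.
import org.apache.spark.sql.functions.{struct, to_csv, to_json}
import spark.implicits._

val df = Seq(("alice", 30), ("bob", 25)).toDF("name", "age")

// Serialize each row's struct to a JSON string, e.g. {"name":"alice","age":30}
df.select(to_json(struct($"name", $"age")).as("json")).show(false)

// Serialize the same struct to a CSV string, e.g. alice,30
df.select(to_csv(struct($"name", $"age")).as("csv")).show(false)
```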
to_timestamp(Column) - 类 中的静态方法org.apache.spark.sql.functions
Converts to a timestamp by casting rules to TimestampType.
to_timestamp(Column, String) - 类 中的静态方法org.apache.spark.sql.functions
Converts time string with the given pattern to timestamp.
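A sketch of the cast-style and pattern-based conversions, assuming an active SparkSession `spark`:

```scala
// Assumes an active SparkSession `spark`.
import org.apache.spark.sql.functions.{to_date, to_timestamp}
import spark.implicits._

val df = Seq("2020-01-15 10:30:00").toDF("s")

df.select(
  to_date($"s").as("d"),                              // cast-style conversion to DateType
  to_timestamp($"s", "yyyy-MM-dd HH:mm:ss").as("ts")  // explicit datetime pattern
).show()
```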
to_utc_timestamp(Column, String) - 类 中的静态方法org.apache.spark.sql.functions
Deprecated since 3.0.0. This function will be removed in a future version.
to_utc_timestamp(Column, Column) - 类 中的静态方法org.apache.spark.sql.functions
Deprecated since 3.0.0. This function will be removed in a future version.
toApacheCommonsStats(StatCounter) - 接口 中的方法org.apache.spark.mllib.stat.test.StreamingTestMethod
Implicit adapter to convert between streaming summary statistics type and the type required by the t-testing libraries.
toApi() - 类 中的方法org.apache.spark.status.LiveRDDDistribution
 
toApi() - 类 中的方法org.apache.spark.status.LiveStage
 
toArray() - 类 中的方法org.apache.spark.input.PortableDataStream
Reads the file as a byte array.
toArray() - 类 中的方法org.apache.spark.ml.linalg.DenseVector
 
toArray() - 接口 中的方法org.apache.spark.ml.linalg.Matrix
Converts to a dense array in column-major order.
toArray() - 类 中的方法org.apache.spark.ml.linalg.SparseVector
 
toArray() - 接口 中的方法org.apache.spark.ml.linalg.Vector
Converts the instance to a double array.
toArray() - 类 中的方法org.apache.spark.mllib.linalg.DenseVector
 
toArray() - 接口 中的方法org.apache.spark.mllib.linalg.Matrix
Converts to a dense array in column-major order.
toArray() - 类 中的方法org.apache.spark.mllib.linalg.SparseVector
 
toArray() - 接口 中的方法org.apache.spark.mllib.linalg.Vector
Converts the instance to a double array.
toArrowField(String, DataType, boolean, String) - 类 中的静态方法org.apache.spark.sql.util.ArrowUtils
Maps field from Spark to Arrow.
toArrowSchema(StructType, String) - 类 中的静态方法org.apache.spark.sql.util.ArrowUtils
Maps schema from Spark to Arrow.
toArrowType(DataType, String) - 类 中的静态方法org.apache.spark.sql.util.ArrowUtils
Maps data type from Spark to Arrow.
toBatch() - 接口 中的方法org.apache.spark.sql.connector.read.Scan
Returns the physical representation of this scan for batch query.
toBigDecimal() - 类 中的方法org.apache.spark.sql.types.Decimal
 
toBlockMatrix() - 类 中的方法org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
Converts to BlockMatrix.
toBlockMatrix(int, int) - 类 中的方法org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
Converts to BlockMatrix.
toBlockMatrix() - 类 中的方法org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
Converts to BlockMatrix.
toBlockMatrix(int, int) - 类 中的方法org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
Converts to BlockMatrix.
toBoolean(String, String) - 类 中的静态方法org.apache.spark.internal.config.ConfigHelpers
 
toBooleanArray() - 类 中的方法org.apache.spark.sql.vectorized.ColumnarArray
 
toBreeze() - 接口 中的方法org.apache.spark.mllib.linalg.distributed.DistributedMatrix
Collects data and assembles a local dense breeze matrix (for test only).
toByte() - 类 中的方法org.apache.spark.sql.types.Decimal
 
toByteArray() - 类 中的方法org.apache.spark.sql.vectorized.ColumnarArray
 
toByteArray() - 类 中的方法org.apache.spark.util.sketch.CountMinSketch
Serializes this CountMinSketch and returns the serialized form.
toByteBuffer() - 接口 中的方法org.apache.spark.storage.BlockData
 
toByteBuffer() - 类 中的方法org.apache.spark.storage.DiskBlockData
 
toCatalystDecimal(HiveDecimalObjectInspector, Object) - 类 中的静态方法org.apache.spark.sql.hive.HiveShim
 
toChunkedByteBuffer(Function1<Object, ByteBuffer>) - 接口 中的方法org.apache.spark.storage.BlockData
 
toChunkedByteBuffer(Function1<Object, ByteBuffer>) - 类 中的方法org.apache.spark.storage.DiskBlockData
 
toColumn() - 类 中的方法org.apache.spark.sql.expressions.Aggregator
Returns this Aggregator as a TypedColumn that can be used in Dataset.
toContinuousStream(String) - 接口 中的方法org.apache.spark.sql.connector.read.Scan
Returns the physical representation of this scan for streaming query with continuous mode.
toCoordinateMatrix() - 类 中的方法org.apache.spark.mllib.linalg.distributed.BlockMatrix
Converts to CoordinateMatrix.
toCoordinateMatrix() - 类 中的方法org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
Converts this matrix to a CoordinateMatrix.
toCryptoConf(SparkConf) - 类 中的静态方法org.apache.spark.security.CryptoStreamUtils
 
toDataFrame(JavaRDD<byte[]>, StructType, SparkSession) - 类 中的静态方法org.apache.spark.sql.api.r.SQLUtils
R callable function to create a DataFrame from a JavaRDD of serialized ArrowRecordBatches.
toDDL() - 类 中的方法org.apache.spark.sql.types.StructField
Returns a string containing a schema in DDL format.
toDDL() - 类 中的方法org.apache.spark.sql.types.StructType
Returns a string containing a schema in DDL format.
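A sketch of toDDL; this one needs only the types package, not a running SparkSession:

```scala
import org.apache.spark.sql.types._

val schema = StructType(Seq(
  StructField("id", LongType, nullable = false),
  StructField("name", StringType)
))

// Emits the schema as DDL, e.g. `id` BIGINT NOT NULL,`name` STRING
println(schema.toDDL)
```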
toDebugString() - 接口 中的方法org.apache.spark.api.java.JavaRDDLike
A description of this RDD and its recursive dependencies for debugging.
toDebugString() - 接口 中的方法org.apache.spark.ml.tree.DecisionTreeModel
Full description of model
toDebugString() - 接口 中的方法org.apache.spark.ml.tree.TreeEnsembleModel
Full description of model
toDebugString() - 类 中的方法org.apache.spark.mllib.tree.model.DecisionTreeModel
Print the full model to a string.
toDebugString() - 类 中的方法org.apache.spark.rdd.RDD
A description of this RDD and its recursive dependencies for debugging.
toDebugString() - 类 中的方法org.apache.spark.SparkConf
Return a string listing all keys and values, one per line.
toDebugString() - 类 中的方法org.apache.spark.sql.types.Decimal
 
toDense() - 接口 中的方法org.apache.spark.ml.linalg.Matrix
Converts this matrix to a dense matrix while maintaining the layout of the current matrix.
toDense() - 接口 中的方法org.apache.spark.ml.linalg.Vector
Converts this vector to a dense vector.
toDense() - 类 中的方法org.apache.spark.mllib.linalg.SparseMatrix
Generate a DenseMatrix from the given SparseMatrix.
toDense() - 接口 中的方法org.apache.spark.mllib.linalg.Vector
Converts this vector to a dense vector.
toDenseColMajor() - 接口 中的方法org.apache.spark.ml.linalg.Matrix
Converts this matrix to a dense matrix in column major order.
toDenseMatrix(boolean) - 接口 中的方法org.apache.spark.ml.linalg.Matrix
Converts this matrix to a dense matrix.
toDenseRowMajor() - 接口 中的方法org.apache.spark.ml.linalg.Matrix
Converts this matrix to a dense matrix in row major order.
toDF(String...) - 类 中的方法org.apache.spark.sql.Dataset
Converts this strongly typed collection of data to generic DataFrame with columns renamed.
toDF() - 类 中的方法org.apache.spark.sql.Dataset
Converts this strongly typed collection of data to generic Dataframe.
toDF(Seq<String>) - 类 中的方法org.apache.spark.sql.Dataset
Converts this strongly typed collection of data to generic DataFrame with columns renamed.
toDF() - 类 中的方法org.apache.spark.sql.DatasetHolder
 
toDF(Seq<String>) - 类 中的方法org.apache.spark.sql.DatasetHolder
 
toDouble(byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
 
toDouble(Decimal) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted
 
toDouble() - Method in class org.apache.spark.sql.types.Decimal
 
toDouble(Decimal) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
 
toDouble(double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
 
toDouble(float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
 
toDouble(int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
 
toDouble(long) - Static method in class org.apache.spark.sql.types.LongExactNumeric
 
toDouble(short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
 
toDoubleArray() - Method in class org.apache.spark.sql.vectorized.ColumnarArray
 
toDS() - Method in class org.apache.spark.sql.DatasetHolder
 
toEdgeTriplet() - Method in class org.apache.spark.graphx.EdgeContext
Converts the edge and vertex properties into an EdgeTriplet for convenience.
toErrorString() - Method in class org.apache.spark.ExceptionFailure
 
toErrorString() - Method in class org.apache.spark.ExecutorLostFailure
 
toErrorString() - Method in class org.apache.spark.FetchFailed
 
toErrorString() - Static method in class org.apache.spark.Resubmitted
 
toErrorString() - Method in class org.apache.spark.TaskCommitDenied
 
toErrorString() - Method in interface org.apache.spark.TaskFailedReason
Error message displayed in the web UI.
toErrorString() - Method in class org.apache.spark.TaskKilled
 
toErrorString() - Static method in class org.apache.spark.TaskResultLost
 
toErrorString() - Static method in class org.apache.spark.UnknownReason
 
toFloat(byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
 
toFloat(Decimal) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted
 
toFloat() - Method in class org.apache.spark.sql.types.Decimal
 
toFloat(Decimal) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
 
toFloat(double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
 
toFloat(float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
 
toFloat(int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
 
toFloat(long) - Static method in class org.apache.spark.sql.types.LongExactNumeric
 
toFloat(short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
 
toFloatArray() - Method in class org.apache.spark.sql.vectorized.ColumnarArray
 
toFormattedString() - Method in class org.apache.spark.streaming.Duration
 
toIndexedRowMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
Converts to IndexedRowMatrix.
toIndexedRowMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
Converts to IndexedRowMatrix.
toInputStream() - Method in interface org.apache.spark.storage.BlockData
 
toInputStream() - Method in class org.apache.spark.storage.DiskBlockData
 
toInspector(DataType) - Method in interface org.apache.spark.sql.hive.HiveInspectors
 
toInspector(Expression) - Method in interface org.apache.spark.sql.hive.HiveInspectors
Maps the Catalyst expression to an ObjectInspector; if the expression is a Literal or is foldable, a constant writable object inspector is returned, otherwise the object inspector is determined by the expression's Catalyst data type.
toInspector(DataType) - Static method in class org.apache.spark.sql.hive.orc.OrcFileFormat
 
toInspector(Expression) - Static method in class org.apache.spark.sql.hive.orc.OrcFileFormat
 
toInt(byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
 
toInt(Decimal) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted
 
toInt() - Method in class org.apache.spark.sql.types.Decimal
 
toInt(Decimal) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
 
toInt(double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
 
toInt(float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
 
toInt(int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
 
toInt(long) - Static method in class org.apache.spark.sql.types.LongExactNumeric
 
toInt(short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
 
toInt() - Method in class org.apache.spark.storage.StorageLevel
 
toIntArray() - Method in class org.apache.spark.sql.vectorized.ColumnarArray
 
toJavaBigDecimal() - Method in class org.apache.spark.sql.types.Decimal
 
toJavaBigInteger() - Method in class org.apache.spark.sql.types.Decimal
 
toJavaDStream() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Convert to a JavaDStream.
toJavaRDD() - Method in class org.apache.spark.rdd.RDD
 
toJavaRDD() - Method in class org.apache.spark.sql.Dataset
Returns the content of the Dataset as a JavaRDD of Ts.
toJson(Matrix) - Static method in class org.apache.spark.ml.linalg.JsonMatrixConverter
Converts the Matrix to a JSON string.
toJson(Vector) - Static method in class org.apache.spark.ml.linalg.JsonVectorConverter
Converts the vector to a JSON string.
toJson() - Method in class org.apache.spark.mllib.linalg.DenseVector
 
toJson() - Method in class org.apache.spark.mllib.linalg.SparseVector
 
toJson() - Method in interface org.apache.spark.mllib.linalg.Vector
Converts the vector to a JSON string.
toJson() - Method in class org.apache.spark.resource.ResourceInformation
 
toJSON() - Method in class org.apache.spark.sql.Dataset
Returns the content of the Dataset as a Dataset of JSON strings.
toJValue() - Method in class org.apache.spark.resource.ResourceInformationJson
 
TOKEN_KIND() - Static method in class org.apache.spark.kafka010.KafkaTokenUtil
 
Tokenizer - Class in org.apache.spark.ml.feature
A tokenizer that converts the input string to lowercase and then splits it by white spaces.
Tokenizer(String) - Constructor for class org.apache.spark.ml.feature.Tokenizer
 
Tokenizer() - Constructor for class org.apache.spark.ml.feature.Tokenizer
 
tokens() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateDelegationTokens
 
tol() - Method in class org.apache.spark.ml.classification.LinearSVC
 
tol() - Method in class org.apache.spark.ml.classification.LinearSVCModel
 
tol() - Method in class org.apache.spark.ml.classification.LogisticRegression
 
tol() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
 
tol() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
 
tol() - Method in class org.apache.spark.ml.clustering.GaussianMixture
 
tol() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
 
tol() - Method in class org.apache.spark.ml.clustering.KMeans
 
tol() - Method in class org.apache.spark.ml.clustering.KMeansModel
 
tol() - Method in interface org.apache.spark.ml.param.shared.HasTol
Param for the convergence tolerance for iterative algorithms (>= 0).
tol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
 
tol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
 
tol() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
 
tol() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
 
tol() - Method in class org.apache.spark.ml.regression.LinearRegression
 
tol() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
 
toLocal() - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
Convert this distributed model to a local representation.
toLocal() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
Convert model to a local model.
toLocalIterator() - Method in interface org.apache.spark.api.java.JavaRDDLike
Return an iterator that contains all of the elements in this RDD.
toLocalIterator() - Method in class org.apache.spark.rdd.RDD
Return an iterator that contains all of the elements in this RDD.
toLocalIterator() - Method in class org.apache.spark.sql.Dataset
Returns an iterator that contains all rows in this Dataset.
toLocalMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
Collect the distributed matrix on the driver as a DenseMatrix.
toLong(byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
 
toLong(Decimal) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted
 
toLong() - Method in class org.apache.spark.sql.types.Decimal
 
toLong(Decimal) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
 
toLong(double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
 
toLong(float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
 
toLong(int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
 
toLong(long) - Static method in class org.apache.spark.sql.types.LongExactNumeric
 
toLong(short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
 
toLongArray() - Method in class org.apache.spark.sql.vectorized.ColumnarArray
 
toLowercase() - Method in class org.apache.spark.ml.feature.RegexTokenizer
Indicates whether to convert all characters to lowercase before tokenizing.
toMetadata(Metadata) - Method in class org.apache.spark.ml.attribute.Attribute
Converts to ML metadata with some existing metadata.
toMetadata() - Method in class org.apache.spark.ml.attribute.Attribute
Converts to ML metadata.
toMetadata(Metadata) - Method in class org.apache.spark.ml.attribute.AttributeGroup
Converts to ML metadata with some existing metadata.
toMetadata() - Method in class org.apache.spark.ml.attribute.AttributeGroup
Converts to ML metadata.
toMetadata(Metadata) - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
 
toMetadata() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
 
toMicroBatchStream(String) - Method in interface org.apache.spark.sql.connector.read.Scan
Returns the physical representation of this scan for a streaming query in micro-batch mode.
toNetty() - Method in interface org.apache.spark.storage.BlockData
Returns a Netty-friendly wrapper for the block's data.
toNetty() - Method in class org.apache.spark.storage.DiskBlockData
Returns a Netty-friendly wrapper for the block's data.
toNumber(String, Function1<String, T>, String, String) - Static method in class org.apache.spark.internal.config.ConfigHelpers
 
toOld() - Method in interface org.apache.spark.ml.tree.DecisionTreeModel
Convert to the spark.mllib DecisionTreeModel (losing some information).
toOld() - Method in interface org.apache.spark.ml.tree.Split
Convert to the old Split format.
tooltip(String, String) - Static method in class org.apache.spark.ui.UIUtils
 
ToolTips - Class in org.apache.spark.ui.storage
 
ToolTips() - Constructor for class org.apache.spark.ui.storage.ToolTips
 
ToolTips - Class in org.apache.spark.ui
 
ToolTips() - Constructor for class org.apache.spark.ui.ToolTips
 
toOps(T, ClassTag<VD>) - Method in interface org.apache.spark.graphx.impl.VertexPartitionBaseOpsConstructor
 
top(int, Comparator<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Returns the top k (largest) elements from this RDD as defined by the specified Comparator[T] and maintains the order.
top(int) - Method in interface org.apache.spark.api.java.JavaRDDLike
Returns the top k (largest) elements from this RDD using the natural ordering for T and maintains the order.
top(int, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
Returns the top k (largest) elements from this RDD as defined by the specified implicit Ordering[T] and maintains the ordering.
toPairDStreamFunctions(DStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>, Ordering<K>) - Static method in class org.apache.spark.streaming.dstream.DStream
 
topByKey(int, Ordering<V>) - Method in class org.apache.spark.mllib.rdd.MLPairRDDFunctions
Returns the top k (largest) elements for each key from this RDD as defined by the specified implicit Ordering[T].
topDocumentsPerTopic(int) - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
Return the top documents for each topic.
topicAssignments() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
 
topicConcentration() - Method in class org.apache.spark.ml.clustering.LDA
 
topicConcentration() - Method in class org.apache.spark.ml.clustering.LDAModel
 
topicConcentration() - Method in interface org.apache.spark.ml.clustering.LDAParams
Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics' distributions over terms.
topicConcentration() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
 
topicConcentration() - Method in class org.apache.spark.mllib.clustering.LDAModel
Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics' distributions over terms.
topicConcentration() - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
 
topicDistribution(Vector) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
Predicts the topic mixture distribution for a document (often called "theta" in the literature).
topicDistributionCol() - Method in class org.apache.spark.ml.clustering.LDA
 
topicDistributionCol() - Method in class org.apache.spark.ml.clustering.LDAModel
 
topicDistributionCol() - Method in interface org.apache.spark.ml.clustering.LDAParams
Output column with estimates of the topic mixture distribution for each document (often called "theta" in the literature).
topicDistributions() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
For each document in the training set, return the distribution over topics for that document ("theta_doc").
topicDistributions(RDD<Tuple2<Object, Vector>>) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
Predicts the topic mixture distribution for each document (often called "theta" in the literature).
topicDistributions(JavaPairRDD<Long, Vector>) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
Java-friendly version of topicDistributions.
topics() - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
 
topicsMatrix() - Method in class org.apache.spark.ml.clustering.LDAModel
Inferred topics, where each topic is represented by a distribution over terms.
topicsMatrix() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
 
topicsMatrix() - Method in class org.apache.spark.mllib.clustering.LDAModel
Inferred topics, where each topic is represented by a distribution over terms.
topicsMatrix() - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
 
topK(Iterator<Tuple2<String, Object>>, int) - Static method in class org.apache.spark.streaming.util.RawTextHelper
Gets the top k words in terms of word counts.
toPMML(StreamResult) - Method in interface org.apache.spark.mllib.pmml.PMMLExportable
Export the model to the stream result in PMML format.
toPMML(String) - Method in interface org.apache.spark.mllib.pmml.PMMLExportable
Export the model to a local file in PMML format.
toPMML(SparkContext, String) - Method in interface org.apache.spark.mllib.pmml.PMMLExportable
Export the model to a directory on a distributed file system in PMML format.
toPMML(OutputStream) - Method in interface org.apache.spark.mllib.pmml.PMMLExportable
Export the model to the OutputStream in PMML format.
toPMML() - Method in interface org.apache.spark.mllib.pmml.PMMLExportable
Export the model to a String in PMML format.
topNode() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
 
Topology - Interface in org.apache.spark.ml.ann
Trait for the artificial neural network (ANN) topology properties.
topologyFile() - Method in class org.apache.spark.storage.FileBasedTopologyMapper
 
topologyInfo() - Method in class org.apache.spark.storage.BlockManagerId
 
topologyMap() - Method in class org.apache.spark.storage.FileBasedTopologyMapper
 
TopologyMapper - Class in org.apache.spark.storage
::DeveloperApi:: TopologyMapper provides topology information for a given host. param: conf SparkConf to get required properties, if needed.
TopologyMapper(SparkConf) - Constructor for class org.apache.spark.storage.TopologyMapper
 
TopologyModel - Interface in org.apache.spark.ml.ann
Trait for the ANN topology model.
toPredict() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
 
topTopicsPerDocument(int) - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
For each document, return the top k weighted topics for that document and their weights.
toRDD(JavaDoubleRDD) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
 
toRDD(JavaPairRDD<K, V>) - Static method in class org.apache.spark.api.java.JavaPairRDD
 
toRDD(JavaRDD<T>) - Static method in class org.apache.spark.api.java.JavaRDD
 
toResourceInformation() - Method in class org.apache.spark.resource.ResourceInformationJson
 
toRowMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
Converts to RowMatrix, dropping row indices after grouping by row index.
toRowMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
Drops row indices and converts this matrix to a RowMatrix.
toScalaBigInt() - Method in class org.apache.spark.sql.types.Decimal
 
toSeq() - Method in class org.apache.spark.ml.param.ParamMap
Converts this param map to a sequence of param pairs.
toSeq() - Method in interface org.apache.spark.sql.Row
Return a Scala Seq representing the row.
toShort() - Method in class org.apache.spark.sql.types.Decimal
 
toShortArray() - Method in class org.apache.spark.sql.vectorized.ColumnarArray
 
toSparkContext(JavaSparkContext) - Static method in class org.apache.spark.api.java.JavaSparkContext
 
toSparse() - Method in interface org.apache.spark.ml.linalg.Matrix
Converts this matrix to a sparse matrix while maintaining the layout of the current matrix.
toSparse() - Method in interface org.apache.spark.ml.linalg.Vector
Converts this vector to a sparse vector with all explicit zeros removed.
toSparse() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
Generate a SparseMatrix from the given DenseMatrix.
toSparse() - Method in interface org.apache.spark.mllib.linalg.Vector
Converts this vector to a sparse vector with all explicit zeros removed.
toSparseColMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
Converts this matrix to a sparse matrix in column major order.
toSparseMatrix(boolean) - Method in interface org.apache.spark.ml.linalg.Matrix
Converts this matrix to a sparse matrix.
toSparseRowMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
Converts this matrix to a sparse matrix in row major order.
toSparseWithSize(int) - Method in interface org.apache.spark.ml.linalg.Vector
Converts this vector to a sparse vector with all explicit zeros removed when the size is known.
toSparseWithSize(int) - Method in interface org.apache.spark.mllib.linalg.Vector
Converts this vector to a sparse vector with all explicit zeros removed when the size is known.
toSplit() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
 
toSplitInfo(Class<?>, String, InputSplit) - Static method in class org.apache.spark.scheduler.SplitInfo
 
toSplitInfo(Class<?>, String, InputSplit) - Static method in class org.apache.spark.scheduler.SplitInfo
 
toString() - Method in class org.apache.spark.api.java.JavaRDD
 
toString() - Method in class org.apache.spark.api.java.Optional
 
toString() - Method in class org.apache.spark.broadcast.Broadcast
 
toString() - Static method in class org.apache.spark.CleanAccum
 
toString() - Static method in class org.apache.spark.CleanBroadcast
 
toString() - Static method in class org.apache.spark.CleanCheckpoint
 
toString() - Static method in class org.apache.spark.CleanRDD
 
toString() - Static method in class org.apache.spark.CleanShuffle
 
toString() - Method in class org.apache.spark.ContextBarrierId
 
toString() - Static method in class org.apache.spark.ExceptionFailure
 
toString() - Static method in class org.apache.spark.ExecutorLostFailure
 
toString() - Static method in class org.apache.spark.ExecutorRegistered
 
toString() - Static method in class org.apache.spark.ExecutorRemoved
 
toString() - Static method in class org.apache.spark.FetchFailed
 
toString() - Method in class org.apache.spark.graphx.EdgeDirection
 
toString() - Method in class org.apache.spark.graphx.EdgeTriplet
 
toString() - Method in class org.apache.spark.ml.attribute.Attribute
 
toString() - Method in class org.apache.spark.ml.attribute.AttributeGroup
 
toString() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
 
toString() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
 
toString() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
 
toString() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
 
toString() - Method in class org.apache.spark.ml.classification.NaiveBayesModel
 
toString() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
 
toString() - Static method in class org.apache.spark.ml.clustering.ClusterData
 
toString() - Method in class org.apache.spark.ml.feature.LabeledPoint
 
toString() - Method in class org.apache.spark.ml.feature.RFormula
 
toString() - Method in class org.apache.spark.ml.feature.RFormulaModel
 
toString() - Method in class org.apache.spark.ml.linalg.DenseVector
 
toString() - Method in interface org.apache.spark.ml.linalg.Matrix
A human-readable representation of the matrix.
toString(int, int) - Method in interface org.apache.spark.ml.linalg.Matrix
A human-readable representation of the matrix with maximum lines and width.
toString() - Method in class org.apache.spark.ml.linalg.SparseVector
 
toString() - Method in class org.apache.spark.ml.param.Param
 
toString() - Method in class org.apache.spark.ml.param.ParamMap
 
toString() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
 
toString() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
 
toString() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionTrainingSummary
 
toString() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
 
toString() - Static method in class org.apache.spark.ml.SaveInstanceEnd
 
toString() - Static method in class org.apache.spark.ml.SaveInstanceStart
 
toString() - Static method in class org.apache.spark.ml.TransformEnd
 
toString() - Static method in class org.apache.spark.ml.TransformStart
 
toString() - Method in interface org.apache.spark.ml.tree.DecisionTreeModel
Summary of the model.
toString() - Method in class org.apache.spark.ml.tree.InternalNode
 
toString() - Method in class org.apache.spark.ml.tree.LeafNode
 
toString() - Method in interface org.apache.spark.ml.tree.TreeEnsembleModel
Summary of the model.
toString() - Method in interface org.apache.spark.ml.util.Identifiable
 
toString() - Static method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data
 
toString() - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
 
toString() - Static method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data
 
toString() - Static method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
 
toString() - Method in class org.apache.spark.mllib.classification.SVMModel
 
toString() - Static method in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$.Data
 
toString() - Static method in class org.apache.spark.mllib.feature.VocabWord
 
toString() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
 
toString() - Method in class org.apache.spark.mllib.fpm.FPGrowth.FreqItemset
 
toString() - Method in class org.apache.spark.mllib.linalg.DenseVector
 
toString() - Static method in class org.apache.spark.mllib.linalg.distributed.IndexedRow
 
toString() - Static method in class org.apache.spark.mllib.linalg.distributed.MatrixEntry
 
toString() - Method in interface org.apache.spark.mllib.linalg.Matrix
A human-readable representation of the matrix.
toString(int, int) - Method in interface org.apache.spark.mllib.linalg.Matrix
A human-readable representation of the matrix with maximum lines and width.
toString() - Method in class org.apache.spark.mllib.linalg.SparseVector
 
toString() - Static method in class org.apache.spark.mllib.recommendation.Rating
 
toString() - Method in class org.apache.spark.mllib.regression.GeneralizedLinearModel
Print a summary of the model.
toString() - Static method in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$.Data
 
toString() - Method in class org.apache.spark.mllib.regression.LabeledPoint
 
toString() - Method in class org.apache.spark.mllib.stat.test.BinarySample
 
toString() - Method in class org.apache.spark.mllib.stat.test.ChiSqTestResult
 
toString() - Method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTestResult
 
toString() - Method in interface org.apache.spark.mllib.stat.test.TestResult
String explaining the hypothesis test result.
toString() - Static method in class org.apache.spark.mllib.tree.configuration.Algo
 
toString() - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
 
toString() - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
 
toString() - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
 
toString() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
Print a summary of the model.
toString() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
 
toString() - Method in class org.apache.spark.mllib.tree.model.Node
 
toString() - Method in class org.apache.spark.mllib.tree.model.Predict
 
toString() - Method in class org.apache.spark.mllib.tree.model.Split
 
toString() - Method in class org.apache.spark.partial.BoundedDouble
 
toString() - Method in class org.apache.spark.partial.PartialResult
 
toString() - Static method in class org.apache.spark.rdd.CheckpointState
 
toString() - Static method in class org.apache.spark.rdd.DeterministicLevel
 
toString() - Method in class org.apache.spark.rdd.RDD
 
toString() - Method in class org.apache.spark.resource.ResourceInformation
 
toString() - Static method in class org.apache.spark.resource.ResourceInformationJson
 
toString() - Static method in class org.apache.spark.scheduler.AccumulableInfo
 
toString() - Static method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
 
toString() - Static method in class org.apache.spark.scheduler.BlacklistedExecutor
 
toString() - Static method in class org.apache.spark.scheduler.ExecutorKilled
 
toString() - Method in class org.apache.spark.scheduler.InputFormatInfo
 
toString() - Static method in class org.apache.spark.scheduler.local.KillTask
 
toString() - Static method in class org.apache.spark.scheduler.local.ReviveOffers
 
toString() - Static method in class org.apache.spark.scheduler.local.StatusUpdate
 
toString() - Static method in class org.apache.spark.scheduler.local.StopExecutor
 
toString() - Static method in class org.apache.spark.scheduler.LossReasonPending
 
toString() - Static method in class org.apache.spark.scheduler.SchedulingMode
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerApplicationEnd
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerApplicationStart
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerBlockUpdated
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerEnvironmentUpdate
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorAdded
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklistedForStage
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerJobEnd
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerJobStart
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerLogStart
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerNodeBlacklisted
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerNodeBlacklistedForStage
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerNodeUnblacklisted
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerStageCompleted
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerStageExecutorMetrics
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerStageSubmitted
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerTaskEnd
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerTaskGettingResult
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerTaskStart
 
toString() - Static method in class org.apache.spark.scheduler.SparkListenerUnpersistRDD
 
toString() - Method in class org.apache.spark.scheduler.SplitInfo
 
toString() - Static method in class org.apache.spark.scheduler.TaskLocality
 
toString() - Method in class org.apache.spark.SerializableWritable
 
toString() - Method in class org.apache.spark.sql.catalog.Column
 
toString() - Method in class org.apache.spark.sql.catalog.Database
 
toString() - Method in class org.apache.spark.sql.catalog.Function
 
toString() - Method in class org.apache.spark.sql.catalog.Table
 
toString() - Method in class org.apache.spark.sql.Column
 
toString() - Method in class org.apache.spark.sql.connector.read.streaming.Offset
 
toString() - Method in class org.apache.spark.sql.Dataset
 
toString() - Static method in class org.apache.spark.sql.dynamicpruning.PlanDynamicPruningFilters
 
toString() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
 
toString() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveDirCommand
 
toString() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
 
toString() - Static method in class org.apache.spark.sql.hive.execution.OptimizedCreateHiveTableAsSelectCommand
 
toString() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
 
toString() - Static method in class org.apache.spark.sql.hive.HiveUDAFBuffer
 
toString() - Method in class org.apache.spark.sql.hive.orc.OrcFileFormat
 
toString() - Static method in class org.apache.spark.sql.hive.RelationConversions
 
toString() - Static method in class org.apache.spark.sql.jdbc.JdbcType
 
toString() - Method in class org.apache.spark.sql.KeyValueGroupedDataset
 
toString() - Method in interface org.apache.spark.sql.RelationalGroupedDataset.GroupType
 
toString() - Method in class org.apache.spark.sql.RelationalGroupedDataset
 
toString() - Method in interface org.apache.spark.sql.Row
 
toString() - Static method in class org.apache.spark.sql.sources.AlwaysFalse
 
toString() - Static method in class org.apache.spark.sql.sources.AlwaysTrue
 
toString() - Static method in class org.apache.spark.sql.sources.And
 
toString() - Static method in class org.apache.spark.sql.sources.EqualNullSafe
 
toString() - Static method in class org.apache.spark.sql.sources.EqualTo
 
toString() - Static method in class org.apache.spark.sql.sources.GreaterThan
 
toString() - Static method in class org.apache.spark.sql.sources.GreaterThanOrEqual
 
toString() - Method in class org.apache.spark.sql.sources.In
 
toString() - Static method in class org.apache.spark.sql.sources.IsNotNull
 
toString() - Static method in class org.apache.spark.sql.sources.IsNull
 
toString() - Static method in class org.apache.spark.sql.sources.LessThan
 
toString() - Static method in class org.apache.spark.sql.sources.LessThanOrEqual
 
toString() - Static method in class org.apache.spark.sql.sources.Not
 
toString() - Static method in class org.apache.spark.sql.sources.Or
 
toString() - Static method in class org.apache.spark.sql.sources.StringContains
 
toString() - Static method in class org.apache.spark.sql.sources.StringEndsWith
 
toString() - Static method in class org.apache.spark.sql.sources.StringStartsWith
 
toString() - Method in class org.apache.spark.sql.streaming.SinkProgress
 
toString() - Method in class org.apache.spark.sql.streaming.SourceProgress
 
toString() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
 
toString() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException
 
toString() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
 
toString() - Method in class org.apache.spark.sql.streaming.StreamingQueryStatus
 
toString() - Static method in class org.apache.spark.sql.types.CharType
 
toString() - Method in class org.apache.spark.sql.types.Decimal
 
toString() - Method in class org.apache.spark.sql.types.DecimalType
 
toString() - Method in class org.apache.spark.sql.types.Metadata
 
toString() - Method in class org.apache.spark.sql.types.StructField
 
toString() - Static method in class org.apache.spark.sql.types.VarcharType
 
toString() - Static method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
 
toString() - Static method in class org.apache.spark.status.api.v1.ApplicationInfo
 
toString() - Method in class org.apache.spark.status.api.v1.StackTrace
 
toString() - Static method in class org.apache.spark.status.api.v1.ThreadStackTrace
 
toString() - Method in class org.apache.spark.storage.BlockId
 
toString() - Method in class org.apache.spark.storage.BlockManagerId
 
toString() - Static method in class org.apache.spark.storage.BroadcastBlockId
 
toString() - Static method in class org.apache.spark.storage.RDDBlockId
 
toString() - Method in class org.apache.spark.storage.RDDInfo
 
toString() - Static method in class org.apache.spark.storage.ShuffleBlockBatchId
 
toString() - Static method in class org.apache.spark.storage.ShuffleBlockId
 
toString() - Static method in class org.apache.spark.storage.ShuffleDataBlockId
 
toString() - Static method in class org.apache.spark.storage.ShuffleIndexBlockId
 
toString() - Method in class org.apache.spark.storage.StorageLevel
 
toString() - Static method in class org.apache.spark.storage.StreamBlockId
 
toString() - Static method in class org.apache.spark.storage.TaskResultBlockId
 
toString() - Method in class org.apache.spark.streaming.Duration
 
toString() - Static method in class org.apache.spark.streaming.scheduler.BatchInfo
 
toString() - Static method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
 
toString() - Static method in class org.apache.spark.streaming.scheduler.ReceiverInfo
 
toString() - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
 
toString() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted
 
toString() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted
 
toString() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted
 
toString() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationCompleted
 
toString() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationStarted
 
toString() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverError
 
toString() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStarted
 
toString() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStopped
 
toString() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerStreamingStarted
 
toString() - 类 中的方法org.apache.spark.streaming.State
 
toString() - 类 中的方法org.apache.spark.streaming.Time
 
toString() - 类 中的静态方法org.apache.spark.TaskCommitDenied
 
toString() - 类 中的静态方法org.apache.spark.TaskKilled
 
toString() - 类 中的静态方法org.apache.spark.TaskState
 
toString() - 类 中的方法org.apache.spark.util.AccumulatorV2
 
toString() - 类 中的方法org.apache.spark.util.MutablePair
 
toString() - 类 中的方法org.apache.spark.util.StatCounter
 
toStructField(Metadata) - Method in class org.apache.spark.ml.attribute.Attribute
Converts to a StructField with some existing metadata.
toStructField() - Method in class org.apache.spark.ml.attribute.Attribute
Converts to a StructField.
toStructField(Metadata) - Method in class org.apache.spark.ml.attribute.AttributeGroup
Converts to a StructField with some existing metadata.
toStructField() - Method in class org.apache.spark.ml.attribute.AttributeGroup
Converts to a StructField.
toStructField(Metadata) - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute

toStructField() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
 
totalBlocksFetched() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions

totalBytesRead(ShuffleReadMetrics) - Static method in class org.apache.spark.ui.jobs.ApiHelper

totalCores() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo

totalCores() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

totalCores() - Method in class org.apache.spark.status.LiveExecutor

totalCount() - Method in class org.apache.spark.util.sketch.CountMinSketch
Total count of items added to this CountMinSketch so far.
totalDelay() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo

totalDelay() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
Time taken for all the jobs of this batch to finish processing from the time they were submitted.
totalDiskSize() - Method in class org.apache.spark.ui.storage.ExecutorStreamSummary

totalDuration() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

totalDuration() - Method in class org.apache.spark.status.LiveExecutor

totalGCTime() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

totalGcTime() - Method in class org.apache.spark.status.LiveExecutor

totalInputBytes() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

totalInputBytes() - Method in class org.apache.spark.status.LiveExecutor

totalIterations() - Method in interface org.apache.spark.ml.classification.LogisticRegressionTrainingSummary
Number of training iterations.
totalIterations() - Method in class org.apache.spark.ml.regression.LinearRegressionTrainingSummary
Number of training iterations until termination. This value is only available when using the "l-bfgs" solver.
totalMemSize() - Method in class org.apache.spark.ui.storage.ExecutorStreamSummary

totalNumNodes() - Method in class org.apache.spark.ml.classification.GBTClassificationModel

totalNumNodes() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel

totalNumNodes() - Method in class org.apache.spark.ml.regression.GBTRegressionModel

totalNumNodes() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel

totalNumNodes() - Method in interface org.apache.spark.ml.tree.TreeEnsembleModel
Total number of nodes, summed over all trees in the ensemble.
totalOffHeap() - Method in class org.apache.spark.status.LiveExecutor

totalOffHeapStorageMemory() - Method in interface org.apache.spark.SparkExecutorInfo

totalOffHeapStorageMemory() - Method in class org.apache.spark.SparkExecutorInfoImpl

totalOffHeapStorageMemory() - Method in class org.apache.spark.status.api.v1.MemoryMetrics

totalOnHeap() - Method in class org.apache.spark.status.LiveExecutor

totalOnHeapStorageMemory() - Method in interface org.apache.spark.SparkExecutorInfo

totalOnHeapStorageMemory() - Method in class org.apache.spark.SparkExecutorInfoImpl

totalOnHeapStorageMemory() - Method in class org.apache.spark.status.api.v1.MemoryMetrics

totalShuffleRead() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

totalShuffleRead() - Method in class org.apache.spark.status.LiveExecutor

totalShuffleWrite() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

totalShuffleWrite() - Method in class org.apache.spark.status.LiveExecutor

totalTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary

totalTasks() - Method in class org.apache.spark.status.LiveExecutor

toTuple() - Method in class org.apache.spark.graphx.EdgeTriplet

toTypeInfo() - Method in class org.apache.spark.sql.hive.HiveInspectors.typeInfoConversions

toUnscaledLong() - Method in class org.apache.spark.sql.types.Decimal

toVirtualHosts(Seq<String>) - Static method in class org.apache.spark.ui.JettyUtils
 
train(RDD<ALS.Rating<ID>>, int, int, int, int, double, boolean, double, boolean, StorageLevel, StorageLevel, int, long, ClassTag<ID>, Ordering<ID>) - Static method in class org.apache.spark.ml.recommendation.ALS
:: DeveloperApi :: Implementation of the ALS algorithm.
train(RDD<LabeledPoint>) - Static method in class org.apache.spark.mllib.classification.NaiveBayes
Trains a Naive Bayes model given an RDD of (label, features) pairs.
train(RDD<LabeledPoint>, double) - Static method in class org.apache.spark.mllib.classification.NaiveBayes
Trains a Naive Bayes model given an RDD of (label, features) pairs.
train(RDD<LabeledPoint>, double, String) - Static method in class org.apache.spark.mllib.classification.NaiveBayes
Trains a Naive Bayes model given an RDD of (label, features) pairs.
train(RDD<LabeledPoint>, int, double, double, double, Vector) - Static method in class org.apache.spark.mllib.classification.SVMWithSGD
Train an SVM model given an RDD of (label, features) pairs.
train(RDD<LabeledPoint>, int, double, double, double) - Static method in class org.apache.spark.mllib.classification.SVMWithSGD
Train an SVM model given an RDD of (label, features) pairs.
train(RDD<LabeledPoint>, int, double, double) - Static method in class org.apache.spark.mllib.classification.SVMWithSGD
Train an SVM model given an RDD of (label, features) pairs.
train(RDD<LabeledPoint>, int) - Static method in class org.apache.spark.mllib.classification.SVMWithSGD
Train an SVM model given an RDD of (label, features) pairs.
train(RDD<Vector>, int, int, String, long) - Static method in class org.apache.spark.mllib.clustering.KMeans
Trains a k-means model using the given set of parameters.
train(RDD<Vector>, int, int, String) - Static method in class org.apache.spark.mllib.clustering.KMeans
Trains a k-means model using the given set of parameters.
train(RDD<Vector>, int, int) - Static method in class org.apache.spark.mllib.clustering.KMeans
Trains a k-means model using the specified parameters and default values for any left unspecified.
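The KMeans.train overloads above all drive Lloyd-style assignment/update iterations over the input vectors. As a rough illustration of one such iteration (a plain-Python sketch under the assumption of Euclidean distance, not MLlib's actual distributed k-means|| implementation):

```python
def kmeans_step(points, centers):
    """One Lloyd iteration: assign each point to its nearest center, then recompute means."""
    clusters = [[] for _ in centers]
    for p in points:
        best = min(range(len(centers)),
                   key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
        clusters[best].append(p)
    # Recompute each center as the mean of its cluster; keep the old center if the cluster is empty.
    return [
        [sum(col) / len(c) for col in zip(*c)] if c else centers[j]
        for j, c in enumerate(clusters)
    ]
```

MLlib repeats this until the centers move less than a tolerance or a maximum iteration count is hit.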
train(RDD<Rating>, int, int, double, int, long) - Static method in class org.apache.spark.mllib.recommendation.ALS
Train a matrix factorization model given an RDD of ratings by users for a subset of products.
train(RDD<Rating>, int, int, double, int) - Static method in class org.apache.spark.mllib.recommendation.ALS
Train a matrix factorization model given an RDD of ratings by users for a subset of products.
train(RDD<Rating>, int, int, double) - Static method in class org.apache.spark.mllib.recommendation.ALS
Train a matrix factorization model given an RDD of ratings by users for a subset of products.
train(RDD<Rating>, int, int) - Static method in class org.apache.spark.mllib.recommendation.ALS
Train a matrix factorization model given an RDD of ratings by users for a subset of products.
train(RDD<LabeledPoint>, Strategy) - Static method in class org.apache.spark.mllib.tree.DecisionTree
Method to train a decision tree model.
train(RDD<LabeledPoint>, Enumeration.Value, Impurity, int) - Static method in class org.apache.spark.mllib.tree.DecisionTree
Method to train a decision tree model.
train(RDD<LabeledPoint>, Enumeration.Value, Impurity, int, int) - Static method in class org.apache.spark.mllib.tree.DecisionTree
Method to train a decision tree model.
train(RDD<LabeledPoint>, Enumeration.Value, Impurity, int, int, int, Enumeration.Value, Map<Object, Object>) - Static method in class org.apache.spark.mllib.tree.DecisionTree
Method to train a decision tree model.
train(RDD<LabeledPoint>, BoostingStrategy) - Static method in class org.apache.spark.mllib.tree.GradientBoostedTrees
Method to train a gradient boosting model.
train(JavaRDD<LabeledPoint>, BoostingStrategy) - Static method in class org.apache.spark.mllib.tree.GradientBoostedTrees
Java-friendly API for org.apache.spark.mllib.tree.GradientBoostedTrees.train
trainClassifier(RDD<LabeledPoint>, int, Map<Object, Object>, String, int, int) - Static method in class org.apache.spark.mllib.tree.DecisionTree
Method to train a decision tree model for binary or multiclass classification.
trainClassifier(JavaRDD<LabeledPoint>, int, Map<Integer, Integer>, String, int, int) - Static method in class org.apache.spark.mllib.tree.DecisionTree
Java-friendly API for org.apache.spark.mllib.tree.DecisionTree.trainClassifier
trainClassifier(RDD<LabeledPoint>, Strategy, int, String, int) - Static method in class org.apache.spark.mllib.tree.RandomForest
Method to train a decision tree model for binary or multiclass classification.
trainClassifier(RDD<LabeledPoint>, int, Map<Object, Object>, int, String, String, int, int, int) - Static method in class org.apache.spark.mllib.tree.RandomForest
Method to train a decision tree model for binary or multiclass classification.
trainClassifier(JavaRDD<LabeledPoint>, int, Map<Integer, Integer>, int, String, String, int, int, int) - Static method in class org.apache.spark.mllib.tree.RandomForest
Java-friendly API for org.apache.spark.mllib.tree.RandomForest.trainClassifier
trainImplicit(RDD<Rating>, int, int, double, int, double, long) - Static method in class org.apache.spark.mllib.recommendation.ALS
Train a matrix factorization model given an RDD of 'implicit preferences' given by users to some products, in the form of (userID, productID, preference) pairs.
trainImplicit(RDD<Rating>, int, int, double, int, double) - Static method in class org.apache.spark.mllib.recommendation.ALS
Train a matrix factorization model given an RDD of 'implicit preferences' of users for a subset of products.
trainImplicit(RDD<Rating>, int, int, double, double) - Static method in class org.apache.spark.mllib.recommendation.ALS
Train a matrix factorization model given an RDD of 'implicit preferences' of users for a subset of products.
trainImplicit(RDD<Rating>, int, int) - Static method in class org.apache.spark.mllib.recommendation.ALS
Train a matrix factorization model given an RDD of 'implicit preferences' of users for a subset of products.
trainingCost() - Method in class org.apache.spark.ml.clustering.BisectingKMeansSummary

trainingCost() - Method in class org.apache.spark.ml.clustering.KMeansSummary

trainingCost() - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel

trainingCost() - Method in class org.apache.spark.mllib.clustering.KMeansModel

trainingLogLikelihood() - Method in class org.apache.spark.ml.clustering.DistributedLDAModel

trainingSummary() - Method in interface org.apache.spark.ml.util.HasTrainingSummary

trainOn(DStream<Vector>) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
Update the clustering model by training on batches of data from a DStream.
trainOn(JavaDStream<Vector>) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
Java-friendly version of trainOn.
trainOn(DStream<LabeledPoint>) - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
Update the model by training on batches of data from a DStream.
trainOn(JavaDStream<LabeledPoint>) - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
Java-friendly version of trainOn.
trainRatio() - Method in class org.apache.spark.ml.tuning.TrainValidationSplit

trainRatio() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel

trainRatio() - Method in interface org.apache.spark.ml.tuning.TrainValidationSplitParams
Param for the ratio between training and validation data.
trainRegressor(RDD<LabeledPoint>, Map<Object, Object>, String, int, int) - Static method in class org.apache.spark.mllib.tree.DecisionTree
Method to train a decision tree model for regression.
trainRegressor(JavaRDD<LabeledPoint>, Map<Integer, Integer>, String, int, int) - Static method in class org.apache.spark.mllib.tree.DecisionTree
Java-friendly API for org.apache.spark.mllib.tree.DecisionTree.trainRegressor
trainRegressor(RDD<LabeledPoint>, Strategy, int, String, int) - Static method in class org.apache.spark.mllib.tree.RandomForest
Method to train a decision tree model for regression.
trainRegressor(RDD<LabeledPoint>, Map<Object, Object>, int, String, String, int, int, int) - Static method in class org.apache.spark.mllib.tree.RandomForest
Method to train a decision tree model for regression.
trainRegressor(JavaRDD<LabeledPoint>, Map<Integer, Integer>, int, String, String, int, int, int) - Static method in class org.apache.spark.mllib.tree.RandomForest
Java-friendly API for org.apache.spark.mllib.tree.RandomForest.trainRegressor
TrainValidationSplit - Class in org.apache.spark.ml.tuning
Validation for hyper-parameter tuning.
TrainValidationSplit(String) - Constructor for class org.apache.spark.ml.tuning.TrainValidationSplit

TrainValidationSplit() - Constructor for class org.apache.spark.ml.tuning.TrainValidationSplit

TrainValidationSplitModel - Class in org.apache.spark.ml.tuning
Model from train validation split.
TrainValidationSplitModel.TrainValidationSplitModelWriter - Class in org.apache.spark.ml.tuning
Writer for TrainValidationSplitModel.
TrainValidationSplitParams - Interface in org.apache.spark.ml.tuning
transferMapSpillFile(File, long[]) - Method in interface org.apache.spark.shuffle.api.SingleSpillShuffleMapOutputWriter
Transfer a file that contains the bytes of all the partitions written by this map task.
transferred() - Method in class org.apache.spark.storage.ReadableChannelFileRegion

transferTo(WritableByteChannel, long) - Method in class org.apache.spark.storage.ReadableChannelFileRegion
 
transform(Function1<Try<T>, Try<S>>, ExecutionContext) - Method in class org.apache.spark.ComplexFutureAction

transform(Dataset<?>) - Method in class org.apache.spark.ml.classification.ClassificationModel
Transforms dataset by reading from featuresCol and appending new columns as specified by parameters: predicted labels as predictionCol of type Double, and raw predictions (confidences) as rawPredictionCol of type Vector.
transform(Dataset<?>) - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.classification.GBTClassificationModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.classification.OneVsRestModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
Transforms dataset by reading from featuresCol and appending new columns as specified by parameters: predicted labels as predictionCol of type Double, raw predictions (confidences) as rawPredictionCol of type Vector, and the probability of each class as probabilityCol of type Vector.
transform(Dataset<?>) - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.clustering.KMeansModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.clustering.LDAModel
Transforms the input dataset.
transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.Binarizer

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.Bucketizer

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.ColumnPruner

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.CountVectorizerModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.FeatureHasher

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.HashingTF

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.IDFModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.ImputerModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.IndexToString

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.Interaction

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.OneHotEncoderModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.PCAModel
Transform a vector by computed Principal Components.
transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.RFormulaModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.RobustScalerModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.SQLTransformer

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.StandardScalerModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.StopWordsRemover

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.StringIndexerModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.VectorAssembler

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.VectorAttributeRewriter

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.VectorIndexerModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.VectorSizeHint

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.VectorSlicer

transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.Word2VecModel
Transform a sentence column to a vector column to represent the whole sentence.
transform(Dataset<?>) - Method in class org.apache.spark.ml.fpm.FPGrowthModel
The transform method first generates the association rules according to the frequent itemsets.
transform(Dataset<?>) - Method in class org.apache.spark.ml.PipelineModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.PredictionModel
Transforms dataset by reading from featuresCol, calling predict, and storing the predictions as a new column predictionCol.
transform(Dataset<?>) - Method in class org.apache.spark.ml.recommendation.ALSModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.regression.GBTRegressionModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel

transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Method in class org.apache.spark.ml.Transformer
Transforms the dataset with optional parameters.
transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Method in class org.apache.spark.ml.Transformer
Transforms the dataset with optional parameters.
transform(Dataset<?>, ParamMap) - Method in class org.apache.spark.ml.Transformer
Transforms the dataset with the provided parameter map as additional parameters.
transform(Dataset<?>) - Method in class org.apache.spark.ml.Transformer
Transforms the input dataset.
transform(Dataset<?>) - Method in class org.apache.spark.ml.tuning.CrossValidatorModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel

transform(Dataset<?>) - Method in class org.apache.spark.ml.UnaryTransformer

transform(Vector) - Method in class org.apache.spark.mllib.feature.ChiSqSelectorModel
Applies transformation on a vector.
transform(Vector) - Method in class org.apache.spark.mllib.feature.ElementwiseProduct
Applies the Hadamard product transformation.
transform(Iterable<?>) - Method in class org.apache.spark.mllib.feature.HashingTF
Transforms the input document into a sparse term frequency vector.
transform(Iterable<?>) - Method in class org.apache.spark.mllib.feature.HashingTF
Transforms the input document into a sparse term frequency vector (Java version).
transform(RDD<D>) - Method in class org.apache.spark.mllib.feature.HashingTF
Transforms the input document to term frequency vectors.
transform(JavaRDD<D>) - Method in class org.apache.spark.mllib.feature.HashingTF
Transforms the input document to term frequency vectors (Java version).
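The HashingTF entries above map each term to a column index with the hashing trick rather than a learned vocabulary. A minimal pure-Python sketch of the idea (Spark itself uses MurmurHash3; crc32 stands in here so the result is deterministic):

```python
import zlib

def hashing_tf(terms, num_features=16):
    """Build a sparse term-frequency map: index = hash(term) mod num_features."""
    vec = {}
    for term in terms:
        idx = zlib.crc32(term.encode("utf-8")) % num_features
        vec[idx] = vec.get(idx, 0) + 1
    return vec
```

Distinct terms can collide in the same bucket; that loss of resolution is the usual trade-off the hashing trick makes for a fixed-size feature space.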
transform(RDD<Vector>) - Method in class org.apache.spark.mllib.feature.IDFModel
Transforms term frequency (TF) vectors to TF-IDF vectors.
transform(Vector) - Method in class org.apache.spark.mllib.feature.IDFModel
Transforms a term frequency (TF) vector to a TF-IDF vector.
transform(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.feature.IDFModel
Transforms term frequency (TF) vectors to TF-IDF vectors (Java version).
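IDFModel rescales the term-frequency vectors produced above by inverse document frequency. A hedged sketch of the smoothed formula MLlib documents, idf = log((m + 1) / (df + 1)), where m is the corpus size and df the per-column document frequency:

```python
import math

def fit_idf(tf_vectors):
    """Compute smoothed IDF weights per column: log((m + 1) / (df + 1))."""
    m = len(tf_vectors)
    num_features = len(tf_vectors[0])
    df = [sum(1 for vec in tf_vectors if vec[j] > 0) for j in range(num_features)]
    return [math.log((m + 1) / (d + 1)) for d in df]

def to_tfidf(vec, idf):
    """Scale one TF vector elementwise by the IDF weights."""
    return [count * w for count, w in zip(vec, idf)]
```

A term that appears in every document gets weight log((m + 1) / (m + 1)) = 0, so ubiquitous terms are suppressed.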
transform(Vector) - Method in class org.apache.spark.mllib.feature.Normalizer
Applies unit length normalization on a vector.
transform(Vector) - Method in class org.apache.spark.mllib.feature.PCAModel
Transform a vector by computed Principal Components.
transform(Vector) - Method in class org.apache.spark.mllib.feature.StandardScalerModel
Applies standardization transformation on a vector.
transform(Vector) - Method in interface org.apache.spark.mllib.feature.VectorTransformer
Applies transformation on a vector.
transform(RDD<Vector>) - Method in interface org.apache.spark.mllib.feature.VectorTransformer
Applies transformation on an RDD[Vector].
transform(JavaRDD<Vector>) - Method in interface org.apache.spark.mllib.feature.VectorTransformer
Applies transformation on a JavaRDD[Vector].
transform(String) - Method in class org.apache.spark.mllib.feature.Word2VecModel
Transforms a word to its vector representation.
transform(Function1<Try<T>, Try<S>>, ExecutionContext) - Method in class org.apache.spark.SimpleFutureAction

Transform - Interface in org.apache.spark.sql.connector.expressions
Represents a transform function in the public logical expression API.
transform(Function1<Dataset<T>, Dataset<U>>) - Method in class org.apache.spark.sql.Dataset
Concise syntax for chaining custom transformations.
transform(Column, Function1<Column, Column>) - Static method in class org.apache.spark.sql.functions
Returns an array of elements after applying a transformation to each element in the input array.
transform(Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
Returns an array of elements after applying a transformation to each element in the input array.
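Both functions.transform overloads above map a lambda over every element of an array column; the Function2 form also receives the element's index. Semantically (setting aside Catalyst and SQL null handling) they behave like this plain-Python analogue, offered only as an illustration:

```python
def transform_array(arr, f):
    """One-argument overload: apply f(element) to every element."""
    return [f(x) for x in arr]

def transform_array_indexed(arr, f):
    """Two-argument overload: apply f(element, index), as in transform(col, (x, i) -> ...)."""
    return [f(x, i) for i, x in enumerate(arr)]
```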
transform(Function<R, JavaRDD<U>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream.
transform(Function2<R, Time, JavaRDD<U>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream.
transform(List<JavaDStream<?>>, Function2<List<JavaRDD<?>>, Time, JavaRDD<T>>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create a new DStream in which each RDD is generated by applying a function on RDDs of the DStreams.
transform(Function1<RDD<T>, RDD<U>>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream.
transform(Function2<RDD<T>, Time, RDD<U>>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream.
transform(Seq<DStream<?>>, Function2<Seq<RDD<?>>, Time, RDD<T>>, ClassTag<T>) - Method in class org.apache.spark.streaming.StreamingContext
Create a new DStream in which each RDD is generated by applying a function on RDDs of the DStreams.
transform_keys(Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
Applies a function to every key-value pair in a map and returns a map with the results of those applications as the new keys for the pairs.
transform_values(Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
Applies a function to every key-value pair in a map and returns a map with the results of those applications as the new values for the pairs.
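transform_keys and transform_values both visit every map entry with a (key, value) lambda; one rewrites the keys, the other the values. A dict-based Python analogue (illustrative only; the real functions additionally handle SQL null semantics and duplicate result keys):

```python
def transform_keys(m, f):
    """Return a new map whose keys are f(key, value) for each entry."""
    return {f(k, v): v for k, v in m.items()}

def transform_values(m, f):
    """Return a new map whose values are f(key, value) for each entry."""
    return {k: f(k, v) for k, v in m.items()}
```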
TransformEnd - Class in org.apache.spark.ml
Event fired after Transformer.transform.
TransformEnd() - Constructor for class org.apache.spark.ml.TransformEnd

transformer() - Method in class org.apache.spark.ml.TransformEnd

Transformer - Class in org.apache.spark.ml
:: DeveloperApi :: Abstract class for transformers that transform one dataset into another.
Transformer() - Constructor for class org.apache.spark.ml.Transformer

transformer() - Method in class org.apache.spark.ml.TransformStart

TransformHelper(Seq<Transform>) - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.TransformHelper

transformImpl(Dataset<?>) - Method in class org.apache.spark.ml.classification.ClassificationModel

transformOutputColumnSchema(StructField, String, boolean, boolean) - Static method in class org.apache.spark.ml.feature.OneHotEncoderCommon
Prepares the StructField with proper metadata for OneHotEncoder's output column.
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.classification.OneVsRest
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.classification.OneVsRestModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.clustering.BisectingKMeans
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.clustering.BisectingKMeansModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.clustering.GaussianMixture
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.clustering.GaussianMixtureModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.clustering.KMeans
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.clustering.KMeansModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.clustering.LDA
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.clustering.LDAModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.Binarizer
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.BucketedRandomProjectionLSH
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.Bucketizer
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.ChiSqSelector
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.ChiSqSelectorModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.ColumnPruner
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.CountVectorizer
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.CountVectorizerModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.FeatureHasher
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.HashingTF
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.IDF
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.IDFModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.Imputer
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.ImputerModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.IndexToString
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.Interaction
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.MaxAbsScaler
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.MaxAbsScalerModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.MinHashLSH
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.MinMaxScaler
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.MinMaxScalerModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.OneHotEncoder
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.OneHotEncoderModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.PCA
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.PCAModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.QuantileDiscretizer
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.RFormula
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.RFormulaModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.RobustScaler
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.RobustScalerModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.SQLTransformer
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.StandardScaler
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.StandardScalerModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.StopWordsRemover
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.StringIndexer
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.StringIndexerModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.VectorAssembler
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.VectorAttributeRewriter
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.VectorIndexer
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.VectorIndexerModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.VectorSizeHint
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.VectorSlicer
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.Word2Vec
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.feature.Word2VecModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.fpm.FPGrowth
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.fpm.FPGrowthModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.Pipeline
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.PipelineModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.PipelineStage
:: DeveloperApi :: Check transform validity and derive the output schema from the input schema.
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.PredictionModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.Predictor
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.recommendation.ALS
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.recommendation.ALSModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.regression.AFTSurvivalRegression
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.regression.AFTSurvivalRegressionModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.regression.IsotonicRegression
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.regression.IsotonicRegressionModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.tuning.CrossValidator
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.tuning.CrossValidatorModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.tuning.TrainValidationSplit
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.tuning.TrainValidationSplitModel
 
transformSchema(StructType) - 类 中的方法org.apache.spark.ml.UnaryTransformer
 
transformSchemaImpl(StructType) - Method in interface org.apache.spark.ml.tuning.ValidatorParams
 
TransformStart - Class in org.apache.spark.ml
Event fired before Transformer.transform.
TransformStart() - Constructor for class org.apache.spark.ml.TransformStart
 
transformToPair(Function<R, JavaPairRDD<K2, V2>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream.
transformToPair(Function2<R, Time, JavaPairRDD<K2, V2>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream.
transformToPair(List<JavaDStream<?>>, Function2<List<JavaRDD<?>>, Time, JavaPairRDD<K, V>>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create a new DStream in which each RDD is generated by applying a function on RDDs of the DStreams.
transformWith(Function1<Try<T>, Future<S>>, ExecutionContext) - Method in class org.apache.spark.ComplexFutureAction
 
transformWith(Function1<Try<T>, Future<S>>, ExecutionContext) - Method in class org.apache.spark.SimpleFutureAction
 
transformWith(JavaDStream<U>, Function3<R, JavaRDD<U>, Time, JavaRDD<W>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream and 'other' DStream.
transformWith(JavaPairDStream<K2, V2>, Function3<R, JavaPairRDD<K2, V2>, Time, JavaRDD<W>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream and 'other' DStream.
transformWith(DStream<U>, Function2<RDD<T>, RDD<U>, RDD<V>>, ClassTag<U>, ClassTag<V>) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream and 'other' DStream.
transformWith(DStream<U>, Function3<RDD<T>, RDD<U>, Time, RDD<V>>, ClassTag<U>, ClassTag<V>) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream and 'other' DStream.
transformWithToPair(JavaDStream<U>, Function3<R, JavaRDD<U>, Time, JavaPairRDD<K2, V2>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream and 'other' DStream.
transformWithToPair(JavaPairDStream<K2, V2>, Function3<R, JavaPairRDD<K2, V2>, Time, JavaPairRDD<K3, V3>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream and 'other' DStream.
translate(Column, String, String) - Static method in class org.apache.spark.sql.functions
Translate any character in the src by a character in replaceString.
transpose() - Method in class org.apache.spark.ml.linalg.DenseMatrix
 
transpose() - Method in interface org.apache.spark.ml.linalg.Matrix
Transpose the Matrix.
transpose() - Method in class org.apache.spark.ml.linalg.SparseMatrix
 
transpose() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
 
transpose() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
Transpose this BlockMatrix.
transpose() - Method in class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
Transposes this CoordinateMatrix.
transpose() - Method in interface org.apache.spark.mllib.linalg.Matrix
Transpose the Matrix.
transpose() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
 
treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int) - Method in interface org.apache.spark.api.java.JavaRDDLike
Aggregates the elements of this RDD in a multi-level tree pattern.
treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
org.apache.spark.api.java.JavaRDDLike.treeAggregate with suggested depth 2.
treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
Aggregates the elements of this RDD in a multi-level tree pattern.
TreeClassifierParams - Interface in org.apache.spark.ml.tree
Parameters for Decision Tree-based classification algorithms.
TreeEnsembleClassifierParams - Interface in org.apache.spark.ml.tree
Parameters for Decision Tree-based ensemble classification algorithms.
TreeEnsembleModel<M extends DecisionTreeModel> - Interface in org.apache.spark.ml.tree
Abstraction for models which are ensembles of decision trees.
TreeEnsembleParams - Interface in org.apache.spark.ml.tree
Parameters for Decision Tree-based ensemble algorithms.
TreeEnsembleRegressorParams - Interface in org.apache.spark.ml.tree
Parameters for Decision Tree-based ensemble regression algorithms.
treeID() - Method in class org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData
 
treeId() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
 
treeReduce(Function2<T, T, T>, int) - Method in interface org.apache.spark.api.java.JavaRDDLike
Reduces the elements of this RDD in a multi-level tree pattern.
treeReduce(Function2<T, T, T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
org.apache.spark.api.java.JavaRDDLike.treeReduce with suggested depth 2.
treeReduce(Function2<T, T, T>, int) - Method in class org.apache.spark.rdd.RDD
Reduces the elements of this RDD in a multi-level tree pattern.
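The treeAggregate and treeReduce entries above combine partial results in a multi-level tree pattern rather than sending every partition's result to the driver in a single step. A minimal plain-Java sketch of the pairwise, level-by-level combining idea (no Spark dependency; `treeCombine` is a hypothetical helper for illustration, not Spark's actual implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BinaryOperator;

public class TreeCombineSketch {
    // Combine elements level by level, pairing neighbors each round,
    // instead of folding everything left-to-right in one pass. This is
    // the shape treeReduce gives to the merge of per-partition results.
    static <T> T treeCombine(List<T> values, BinaryOperator<T> combOp) {
        List<T> level = new ArrayList<>(values);
        while (level.size() > 1) {
            List<T> next = new ArrayList<>();
            for (int i = 0; i < level.size(); i += 2) {
                if (i + 1 < level.size()) {
                    next.add(combOp.apply(level.get(i), level.get(i + 1)));
                } else {
                    next.add(level.get(i)); // odd element carries over to the next level
                }
            }
            level = next;
        }
        return level.get(0);
    }

    public static void main(String[] args) {
        // Imagine these are per-partition partial sums; combine them as a binary tree.
        List<Integer> partials = List.of(1, 2, 3, 4, 5);
        System.out.println(treeCombine(partials, Integer::sum)); // prints 15
    }
}
```

In Spark the intermediate levels run as extra reduce stages on the executors, which is why the `depth` parameter (suggested default 2) trades extra stages for less data arriving at the driver.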
TreeRegressorParams - Interface in org.apache.spark.ml.tree
Parameters for Decision Tree-based regression algorithms.
trees() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
 
trees() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
 
trees() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
 
trees() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
 
trees() - Method in interface org.apache.spark.ml.tree.TreeEnsembleModel
Trees in this ensemble.
trees() - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
 
trees() - Method in class org.apache.spark.mllib.tree.model.RandomForestModel
 
treeStrategy() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
 
treeString() - Method in class org.apache.spark.sql.types.StructType
 
treeString(int) - Method in class org.apache.spark.sql.types.StructType
 
treeWeights() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
 
treeWeights() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
 
treeWeights() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
 
treeWeights() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
 
treeWeights() - Method in interface org.apache.spark.ml.tree.TreeEnsembleModel
Weights for each tree, zippable with trees.
treeWeights() - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
 
triangleCount() - Method in class org.apache.spark.graphx.GraphOps
Compute the number of triangles passing through each vertex.
TriangleCount - Class in org.apache.spark.graphx.lib
Compute the number of triangles passing through each vertex.
TriangleCount() - Constructor for class org.apache.spark.graphx.lib.TriangleCount
 
trigger(Trigger) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
Set the trigger for the stream query.
Trigger - Class in org.apache.spark.sql.streaming
Policy used to indicate how often results should be produced by a StreamingQuery.
Trigger() - Constructor for class org.apache.spark.sql.streaming.Trigger
 
TriggerThreadDump$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.TriggerThreadDump$
 
trim(Column) - Static method in class org.apache.spark.sql.functions
Trim the spaces from both ends for the specified string column.
trim(Column, String) - Static method in class org.apache.spark.sql.functions
Trim the specified character from both ends for the specified string column.
TrimHorizon() - Constructor for class org.apache.spark.streaming.kinesis.KinesisInitialPositions.TrimHorizon
 
TripletFields - Class in org.apache.spark.graphx
Represents a subset of the fields of an EdgeTriplet or EdgeContext.
TripletFields() - Constructor for class org.apache.spark.graphx.TripletFields
Constructs a default TripletFields in which all fields are included.
TripletFields(boolean, boolean, boolean) - Constructor for class org.apache.spark.graphx.TripletFields
 
triplets() - Method in class org.apache.spark.graphx.Graph
An RDD containing the edge triplets, which are edges along with the vertex data associated with the adjacent vertices.
triplets() - Method in class org.apache.spark.graphx.impl.GraphImpl
 
truePositiveRate(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
Returns the true positive rate for a given label (category).
truePositiveRateByLabel() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
Returns the true positive rate for each label (category).
trunc(Column, String) - Static method in class org.apache.spark.sql.functions
Returns date truncated to the unit specified by the format.
truncate() - Method in interface org.apache.spark.sql.connector.write.SupportsOverwrite
 
truncate() - Method in interface org.apache.spark.sql.connector.write.SupportsTruncate
Configures a write to replace all existing data with data committed in the write.
tryCompare(T, T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
 
tryCompare(T, T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
 
tryCompare(T, T) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
 
tryCompare(T, T) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
 
tryCompare(T, T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
 
tryCompare(T, T) - Static method in class org.apache.spark.sql.types.LongExactNumeric
 
tryCompare(T, T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
 
tryLog(Function0<T>) - Static method in class org.apache.spark.util.Utils
Executes the given block in a Try, logging any uncaught exceptions.
tryLogNonFatalError(Function0<BoxedUnit>) - Static method in class org.apache.spark.util.Utils
Executes the given block.
tryOrExit(Function0<BoxedUnit>) - Static method in class org.apache.spark.util.Utils
Execute a block of code that evaluates to Unit, forwarding any uncaught exceptions to the default UncaughtExceptionHandler. NOTE: This method is to be called by the spark-started JVM process.
tryOrIOException(Function0<T>) - Static method in class org.apache.spark.util.Utils
Execute a block of code that returns a value, re-throwing any non-fatal uncaught exceptions as IOException.
tryOrStopSparkContext(SparkContext, Function0<BoxedUnit>) - Static method in class org.apache.spark.util.Utils
Execute a block of code that evaluates to Unit, stopping the SparkContext if there is any uncaught exception. NOTE: This method is to be called by driver-side components to avoid stopping the user-started JVM process completely; in contrast, tryOrExit is to be called in the spark-started JVM process.
tryRecoverFromCheckpoint(String) - Method in class org.apache.spark.streaming.StreamingContextPythonHelper
This is a private method only for Python to implement getOrCreate.
tryWithResource(Function0<R>, Function1<R, T>) - Static method in class org.apache.spark.util.Utils
 
tryWithSafeFinally(Function0<T>, Function0<BoxedUnit>) - Static method in class org.apache.spark.util.Utils
Execute a block of code, then a finally block, but if exceptions happen in the finally block, do not suppress the original exception.
tryWithSafeFinallyAndFailureCallbacks(Function0<T>, Function0<BoxedUnit>, Function0<BoxedUnit>) - Static method in class org.apache.spark.util.Utils
Execute a block of code and call the failure callbacks in the catch block.
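The tryWithSafeFinally entry above describes running a finally block without letting a failure in it mask the original exception. A hedged plain-Java sketch of that pattern (an illustration of the idea, not Spark's actual implementation):

```java
import java.util.concurrent.Callable;

public class SafeFinallySketch {
    // Run `block`, then `finallyBlock`. If both throw, the original exception
    // propagates and the finally-block failure is attached as a suppressed
    // exception instead of replacing it.
    static <T> T tryWithSafeFinally(Callable<T> block, Runnable finallyBlock) throws Exception {
        Exception original = null;
        try {
            return block.call();
        } catch (Exception e) {
            original = e;
            throw e;
        } finally {
            try {
                finallyBlock.run();
            } catch (RuntimeException t) {
                if (original != null) {
                    original.addSuppressed(t); // keep the real cause visible
                } else {
                    throw t; // no earlier failure, so the cleanup failure propagates
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        try {
            tryWithSafeFinally(
                () -> { throw new IllegalStateException("real failure"); },
                () -> { throw new RuntimeException("cleanup failed"); });
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage() + " (" + e.getSuppressed().length + " suppressed)");
        }
    }
}
```

Without this guard, a plain `finally` that throws would silently discard the original exception, which is exactly the debugging hazard the Utils method avoids.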
tuple(Encoder<T1>, Encoder<T2>) - Static method in class org.apache.spark.sql.Encoders
An encoder for 2-ary tuples.
tuple(Encoder<T1>, Encoder<T2>, Encoder<T3>) - Static method in class org.apache.spark.sql.Encoders
An encoder for 3-ary tuples.
tuple(Encoder<T1>, Encoder<T2>, Encoder<T3>, Encoder<T4>) - Static method in class org.apache.spark.sql.Encoders
An encoder for 4-ary tuples.
tuple(Encoder<T1>, Encoder<T2>, Encoder<T3>, Encoder<T4>, Encoder<T5>) - Static method in class org.apache.spark.sql.Encoders
An encoder for 5-ary tuples.
tValues() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionTrainingSummary
 
tValues() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
 
Tweedie$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Tweedie$
 
TYPE() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
 
typed - Class in org.apache.spark.sql.expressions.javalang
Deprecated.
As of release 3.0.0, please use the untyped builtin aggregate functions.
typed() - Constructor for class org.apache.spark.sql.expressions.javalang.typed
Deprecated.
 
typed - Class in org.apache.spark.sql.expressions.scalalang
Deprecated.
Please use untyped builtin aggregate functions. Since 3.0.0.
typed() - Constructor for class org.apache.spark.sql.expressions.scalalang.typed
Deprecated.
 
TypedColumn<T,U> - Class in org.apache.spark.sql
A Column where an Encoder has been given for the expected input and return type.
TypedColumn(Expression, ExpressionEncoder<U>) - Constructor for class org.apache.spark.sql.TypedColumn
 
typedLit(T, TypeTags.TypeTag<T>) - Static method in class org.apache.spark.sql.functions
Creates a Column of literal value.
typeInfoConversions(DataType) - Constructor for class org.apache.spark.sql.hive.HiveInspectors.typeInfoConversions
 
typeInfoConversions(DataType) - Static method in class org.apache.spark.sql.hive.orc.OrcFileFormat
 
typeName() - Method in class org.apache.spark.mllib.linalg.VectorUDT
 
typeName() - Static method in class org.apache.spark.sql.types.BinaryType
 
typeName() - Static method in class org.apache.spark.sql.types.BooleanType
 
typeName() - Static method in class org.apache.spark.sql.types.ByteType
 
typeName() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
 
typeName() - Method in class org.apache.spark.sql.types.DataType
Name of the type used in JSON serialization.
typeName() - Static method in class org.apache.spark.sql.types.DateType
 
typeName() - Method in class org.apache.spark.sql.types.DecimalType
 
typeName() - Static method in class org.apache.spark.sql.types.DoubleType
 
typeName() - Static method in class org.apache.spark.sql.types.FloatType
 
typeName() - Static method in class org.apache.spark.sql.types.IntegerType
 
typeName() - Static method in class org.apache.spark.sql.types.LongType
 
typeName() - Static method in class org.apache.spark.sql.types.NullType
 
typeName() - Static method in class org.apache.spark.sql.types.ShortType
 
typeName() - Static method in class org.apache.spark.sql.types.StringType
 
typeName() - Static method in class org.apache.spark.sql.types.TimestampType
 

U

U() - Method in class org.apache.spark.mllib.linalg.SingularValueDecomposition
 
udf(Function0<RT>, TypeTags.TypeTag<RT>) - Static method in class org.apache.spark.sql.functions
Defines a Scala closure of 0 arguments as a user-defined function (UDF).
udf(Function1<A1, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>) - Static method in class org.apache.spark.sql.functions
Defines a Scala closure of 1 argument as a user-defined function (UDF).
udf(Function2<A1, A2, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>) - Static method in class org.apache.spark.sql.functions
Defines a Scala closure of 2 arguments as a user-defined function (UDF).
udf(Function3<A1, A2, A3, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>) - Static method in class org.apache.spark.sql.functions
Defines a Scala closure of 3 arguments as a user-defined function (UDF).
udf(Function4<A1, A2, A3, A4, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>) - Static method in class org.apache.spark.sql.functions
Defines a Scala closure of 4 arguments as a user-defined function (UDF).
udf(Function5<A1, A2, A3, A4, A5, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>) - Static method in class org.apache.spark.sql.functions
Defines a Scala closure of 5 arguments as a user-defined function (UDF).
udf(Function6<A1, A2, A3, A4, A5, A6, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>) - Static method in class org.apache.spark.sql.functions
Defines a Scala closure of 6 arguments as a user-defined function (UDF).
udf(Function7<A1, A2, A3, A4, A5, A6, A7, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>) - Static method in class org.apache.spark.sql.functions
Defines a Scala closure of 7 arguments as a user-defined function (UDF).
udf(Function8<A1, A2, A3, A4, A5, A6, A7, A8, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>) - Static method in class org.apache.spark.sql.functions
Defines a Scala closure of 8 arguments as a user-defined function (UDF).
udf(Function9<A1, A2, A3, A4, A5, A6, A7, A8, A9, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>) - Static method in class org.apache.spark.sql.functions
Defines a Scala closure of 9 arguments as a user-defined function (UDF).
udf(Function10<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>) - Static method in class org.apache.spark.sql.functions
Defines a Scala closure of 10 arguments as a user-defined function (UDF).
udf(UDF0<?>, DataType) - Static method in class org.apache.spark.sql.functions
Defines a Java UDF0 instance as a user-defined function (UDF).
udf(UDF1<?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
Defines a Java UDF1 instance as a user-defined function (UDF).
udf(UDF2<?, ?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
Defines a Java UDF2 instance as a user-defined function (UDF).
udf(UDF3<?, ?, ?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
Defines a Java UDF3 instance as a user-defined function (UDF).
udf(UDF4<?, ?, ?, ?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
Defines a Java UDF4 instance as a user-defined function (UDF).
udf(UDF5<?, ?, ?, ?, ?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
Defines a Java UDF5 instance as a user-defined function (UDF).
udf(UDF6<?, ?, ?, ?, ?, ?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
Defines a Java UDF6 instance as a user-defined function (UDF).
udf(UDF7<?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
Defines a Java UDF7 instance as a user-defined function (UDF).
udf(UDF8<?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
Defines a Java UDF8 instance as a user-defined function (UDF).
udf(UDF9<?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
Defines a Java UDF9 instance as a user-defined function (UDF).
udf(UDF10<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
Defines a Java UDF10 instance as a user-defined function (UDF).
udf(Object, DataType) - Static method in class org.apache.spark.sql.functions
Defines a deterministic user-defined function (UDF) using a Scala closure.
udf() - Method in class org.apache.spark.sql.SparkSession
A collection of methods for registering user-defined functions (UDF).
udf() - Method in class org.apache.spark.sql.SQLContext
A collection of methods for registering user-defined functions (UDF).
UDF0<R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 0 arguments.
UDF1<T1,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 1 argument.
UDF10<T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 10 arguments.
UDF11<T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 11 arguments.
UDF12<T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 12 arguments.
UDF13<T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 13 arguments.
UDF14<T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 14 arguments.
UDF15<T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 15 arguments.
UDF16<T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 16 arguments.
UDF17<T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 17 arguments.
UDF18<T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 18 arguments.
UDF19<T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18,T19,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 19 arguments.
UDF2<T1,T2,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 2 arguments.
UDF20<T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18,T19,T20,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 20 arguments.
UDF21<T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18,T19,T20,T21,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 21 arguments.
UDF22<T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18,T19,T20,T21,T22,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 22 arguments.
UDF3<T1,T2,T3,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 3 arguments.
UDF4<T1,T2,T3,T4,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 4 arguments.
UDF5<T1,T2,T3,T4,T5,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 5 arguments.
UDF6<T1,T2,T3,T4,T5,T6,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 6 arguments.
UDF7<T1,T2,T3,T4,T5,T6,T7,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 7 arguments.
UDF8<T1,T2,T3,T4,T5,T6,T7,T8,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 8 arguments.
UDF9<T1,T2,T3,T4,T5,T6,T7,T8,T9,R> - Interface in org.apache.spark.sql.api.java
A Spark SQL UDF that has 9 arguments.
UDFRegistration - Class in org.apache.spark.sql
Functions for registering user-defined functions.
UDTRegistration - Class in org.apache.spark.sql.types
This object keeps the mappings between user classes and their User Defined Types (UDTs).
UDTRegistration() - Constructor for class org.apache.spark.sql.types.UDTRegistration
 
UI - Class in org.apache.spark.internal.config
 
UI() - Constructor for class org.apache.spark.internal.config.UI
 
UI_ALLOW_FRAMING_FROM() - Static method in class org.apache.spark.internal.config.UI
 
UI_CONSOLE_PROGRESS_UPDATE_INTERVAL() - Static method in class org.apache.spark.internal.config.UI
 
UI_ENABLED() - Static method in class org.apache.spark.internal.config.UI
 
UI_FILTERS() - Static method in class org.apache.spark.internal.config.UI
 
UI_KILL_ENABLED() - Static method in class org.apache.spark.internal.config.UI
 
UI_PORT() - Static method in class org.apache.spark.internal.config.UI
 
UI_PROMETHEUS_ENABLED() - Static method in class org.apache.spark.internal.config.UI
 
UI_REQUEST_HEADER_SIZE() - Static method in class org.apache.spark.internal.config.UI
 
UI_REVERSE_PROXY() - Static method in class org.apache.spark.internal.config.UI
 
UI_REVERSE_PROXY_URL() - Static method in class org.apache.spark.internal.config.UI
 
UI_SHOW_CONSOLE_PROGRESS() - Static method in class org.apache.spark.internal.config.UI
 
UI_STRICT_TRANSPORT_SECURITY() - Static method in class org.apache.spark.internal.config.UI
 
UI_THREAD_DUMPS_ENABLED() - Static method in class org.apache.spark.internal.config.UI
 
UI_TIMELINE_TASKS_MAXIMUM() - Static method in class org.apache.spark.internal.config.UI
 
UI_VIEW_ACLS() - Static method in class org.apache.spark.internal.config.UI
 
UI_VIEW_ACLS_GROUPS() - Static method in class org.apache.spark.internal.config.UI
 
UI_X_CONTENT_TYPE_OPTIONS() - Static method in class org.apache.spark.internal.config.UI
 
UI_X_XSS_PROTECTION() - Static method in class org.apache.spark.internal.config.UI
 
uid() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
 
uid() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
 
uid() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
 
uid() - Method in class org.apache.spark.ml.classification.GBTClassifier
 
uid() - Method in class org.apache.spark.ml.classification.LinearSVC
 
uid() - Method in class org.apache.spark.ml.classification.LinearSVCModel
 
uid() - Method in class org.apache.spark.ml.classification.LogisticRegression
 
uid() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
 
uid() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
 
uid() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
 
uid() - Method in class org.apache.spark.ml.classification.NaiveBayes
 
uid() - Method in class org.apache.spark.ml.classification.NaiveBayesModel
 
uid() - Method in class org.apache.spark.ml.classification.OneVsRest
 
uid() - Method in class org.apache.spark.ml.classification.OneVsRestModel
 
uid() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
 
uid() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
 
uid() - Method in class org.apache.spark.ml.clustering.BisectingKMeans
 
uid() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
 
uid() - Method in class org.apache.spark.ml.clustering.GaussianMixture
 
uid() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
 
uid() - Method in class org.apache.spark.ml.clustering.KMeans
 
uid() - Method in class org.apache.spark.ml.clustering.KMeansModel
 
uid() - Method in class org.apache.spark.ml.clustering.LDA
 
uid() - Method in class org.apache.spark.ml.clustering.LDAModel
 
uid() - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
 
uid() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
 
uid() - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
 
uid() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
 
uid() - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
 
uid() - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
 
uid() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
 
uid() - Method in class org.apache.spark.ml.feature.Binarizer
 
uid() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
 
uid() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
 
uid() - Method in class org.apache.spark.ml.feature.Bucketizer
 
uid() - Method in class org.apache.spark.ml.feature.ChiSqSelector
 
uid() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
 
uid() - Method in class org.apache.spark.ml.feature.ColumnPruner
 
uid() - Method in class org.apache.spark.ml.feature.CountVectorizer
 
uid() - Method in class org.apache.spark.ml.feature.CountVectorizerModel
 
uid() - Method in class org.apache.spark.ml.feature.DCT
 
uid() - Method in class org.apache.spark.ml.feature.ElementwiseProduct
 
uid() - Method in class org.apache.spark.ml.feature.FeatureHasher
 
uid() - Method in class org.apache.spark.ml.feature.HashingTF
 
uid() - Method in class org.apache.spark.ml.feature.IDF
 
uid() - Method in class org.apache.spark.ml.feature.IDFModel
 
uid() - Method in class org.apache.spark.ml.feature.Imputer
 
uid() - Method in class org.apache.spark.ml.feature.ImputerModel
 
uid() - Method in class org.apache.spark.ml.feature.IndexToString
 
uid() - Method in class org.apache.spark.ml.feature.Interaction
 
uid() - Method in class org.apache.spark.ml.feature.MaxAbsScaler
 
uid() - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
 
uid() - Method in class org.apache.spark.ml.feature.MinHashLSH
 
uid() - Method in class org.apache.spark.ml.feature.MinHashLSHModel
 
uid() - Method in class org.apache.spark.ml.feature.MinMaxScaler
 
uid() - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
 
uid() - Method in class org.apache.spark.ml.feature.NGram
 
uid() - Method in class org.apache.spark.ml.feature.Normalizer
 
uid() - Method in class org.apache.spark.ml.feature.OneHotEncoder
 
uid() - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
 
uid() - Method in class org.apache.spark.ml.feature.PCA
 
uid() - Method in class org.apache.spark.ml.feature.PCAModel
 
uid() - Method in class org.apache.spark.ml.feature.PolynomialExpansion
 
uid() - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
 
uid() - Method in class org.apache.spark.ml.feature.RegexTokenizer
 
uid() - Method in class org.apache.spark.ml.feature.RFormula
 
uid() - Method in class org.apache.spark.ml.feature.RFormulaModel
 
uid() - Method in class org.apache.spark.ml.feature.RobustScaler
 
uid() - Method in class org.apache.spark.ml.feature.RobustScalerModel
 
uid() - Method in class org.apache.spark.ml.feature.SQLTransformer
 
uid() - Method in class org.apache.spark.ml.feature.StandardScaler
 
uid() - Method in class org.apache.spark.ml.feature.StandardScalerModel
 
uid() - Method in class org.apache.spark.ml.feature.StopWordsRemover
 
uid() - Method in class org.apache.spark.ml.feature.StringIndexer
 
uid() - Method in class org.apache.spark.ml.feature.StringIndexerModel
 
uid() - Method in class org.apache.spark.ml.feature.Tokenizer
 
uid() - Method in class org.apache.spark.ml.feature.VectorAssembler
 
uid() - Method in class org.apache.spark.ml.feature.VectorAttributeRewriter
 
uid() - Method in class org.apache.spark.ml.feature.VectorIndexer
 
uid() - Method in class org.apache.spark.ml.feature.VectorIndexerModel
 
uid() - Method in class org.apache.spark.ml.feature.VectorSizeHint
 
uid() - Method in class org.apache.spark.ml.feature.VectorSlicer
 
uid() - Method in class org.apache.spark.ml.feature.Word2Vec
 
uid() - Method in class org.apache.spark.ml.feature.Word2VecModel
 
uid() - Method in class org.apache.spark.ml.fpm.FPGrowth
 
uid() - Method in class org.apache.spark.ml.fpm.FPGrowthModel
 
uid() - Method in class org.apache.spark.ml.fpm.PrefixSpan
 
uid() - Method in class org.apache.spark.ml.Pipeline
 
uid() - Method in class org.apache.spark.ml.PipelineModel
 
uid() - Method in class org.apache.spark.ml.recommendation.ALS
 
uid() - Method in class org.apache.spark.ml.recommendation.ALSModel
 
uid() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
 
uid() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
 
uid() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
 
uid() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
 
uid() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
 
uid() - Method in class org.apache.spark.ml.regression.GBTRegressor
 
uid() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
 
uid() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
 
uid() - Method in class org.apache.spark.ml.regression.IsotonicRegression
 
uid() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
 
uid() - Method in class org.apache.spark.ml.regression.LinearRegression
 
uid() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
 
uid() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
 
uid() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
 
uid() - Method in class org.apache.spark.ml.tuning.CrossValidator
 
uid() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
 
uid() - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
 
uid() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
 
uid() - Method in interface org.apache.spark.ml.util.Identifiable
An immutable unique ID for the object and its derivatives.
uiRoot() - Method in interface org.apache.spark.status.api.v1.ApiRequestContext
 
UIRoot - Interface in org.apache.spark.status.api.v1
This trait is shared by all the root containers for application UI information -- the HistoryServer and the application UI.
uiRoot(HttpServletRequest) - Static method in class org.apache.spark.ui.UIUtils
 
UIRootFromServletContext - Class in org.apache.spark.status.api.v1
 
UIRootFromServletContext() - Constructor for class org.apache.spark.status.api.v1.UIRootFromServletContext
 
UIUtils - Class in org.apache.spark.streaming.ui
 
UIUtils() - Constructor for class org.apache.spark.streaming.ui.UIUtils
 
UIUtils - Class in org.apache.spark.ui
Utility functions for generating XML pages with Spark content.
UIUtils() - Constructor for class org.apache.spark.ui.UIUtils
 
uiWebUrl() - Method in class org.apache.spark.SparkContext
 
UIWorkloadGenerator - Class in org.apache.spark.ui
Continuously generates jobs that expose various features of the WebUI (internal testing tool).
UIWorkloadGenerator() - Constructor for class org.apache.spark.ui.UIWorkloadGenerator
 
unapply(EdgeContext<VD, ED, A>) - Static method in class org.apache.spark.graphx.EdgeContext
Extractor mainly used for Graph#aggregateMessages*.
unapply(DenseVector) - Static method in class org.apache.spark.ml.linalg.DenseVector
Extracts the value array from a dense vector.
unapply(SparseVector) - Static method in class org.apache.spark.ml.linalg.SparseVector
 
unapply(DenseVector) - Static method in class org.apache.spark.mllib.linalg.DenseVector
Extracts the value array from a dense vector.
unapply(SparseVector) - Static method in class org.apache.spark.mllib.linalg.SparseVector
 
unapply(Column) - Static method in class org.apache.spark.sql.Column
 
unapply(Seq<String>) - Method in class org.apache.spark.sql.connector.catalog.LookupCatalog.AsTableIdentifier$
 
unapply(Seq<String>) - Static method in class org.apache.spark.sql.connector.catalog.LookupCatalog.AsTableIdentifier
 
unapply(Seq<String>) - Method in class org.apache.spark.sql.connector.catalog.LookupCatalog.AsTemporaryViewIdentifier$
 
unapply(Seq<String>) - Static method in class org.apache.spark.sql.connector.catalog.LookupCatalog.AsTemporaryViewIdentifier
 
unapply(Seq<String>) - Method in class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndIdentifierParts$
 
unapply(Seq<String>) - Static method in class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndIdentifierParts
 
unapply(Seq<String>) - Method in class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndNamespace$
 
unapply(Seq<String>) - Static method in class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndNamespace
 
unapply(Seq<String>) - Method in class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogObjectIdentifier$
 
unapply(Seq<String>) - Static method in class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogObjectIdentifier
 
unapply(Literal<T>) - Static method in class org.apache.spark.sql.connector.expressions.Lit
 
unapply(Transform) - Static method in class org.apache.spark.sql.connector.expressions.NamedTransform
 
unapply(NamedReference) - Static method in class org.apache.spark.sql.connector.expressions.Ref
 
unapply(Expression) - Method in class org.apache.spark.sql.types.DecimalType.Expression$
 
unapply(DecimalType) - Method in class org.apache.spark.sql.types.DecimalType.Fixed$
 
unapply(DataType) - Static method in class org.apache.spark.sql.types.DecimalType
 
unapply(Expression) - Static method in class org.apache.spark.sql.types.DecimalType
 
unapply(Expression) - Static method in class org.apache.spark.sql.types.NumericType
Enables matching against NumericType for expressions: case Cast(child @ NumericType(), StringType) => ...
unapply(Throwable) - Static method in class org.apache.spark.util.CausedBy
 
unapply(String) - Static method in class org.apache.spark.util.IntParam
 
unapply(String) - Static method in class org.apache.spark.util.MemoryParam
 
UnaryTransformer<IN,OUT,T extends UnaryTransformer<IN,OUT,T>> - Class in org.apache.spark.ml
:: DeveloperApi :: Abstract class for transformers that take one input column, apply a transformation, and output the result as a new column.
UnaryTransformer() - Constructor for class org.apache.spark.ml.UnaryTransformer

unbase64(Column) - Static method in class org.apache.spark.sql.functions
Decodes a BASE64 encoded string column and returns it as a binary column.
unboundedFollowing() - Static method in class org.apache.spark.sql.expressions.Window
Value representing the last row in the partition, equivalent to "UNBOUNDED FOLLOWING" in SQL.
unboundedPreceding() - Static method in class org.apache.spark.sql.expressions.Window
Value representing the first row in the partition, equivalent to "UNBOUNDED PRECEDING" in SQL.
unbroadcast(long, boolean, boolean) - Method in interface org.apache.spark.broadcast.BroadcastFactory

uncacheTable(String) - Method in class org.apache.spark.sql.catalog.Catalog
Removes the specified table from the in-memory cache.
uncacheTable(String) - Method in class org.apache.spark.sql.SQLContext
Removes the specified table from the in-memory cache.
UNCAUGHT_EXCEPTION() - Static method in class org.apache.spark.util.SparkExitCode
The default uncaught exception handler was reached.
UNCAUGHT_EXCEPTION_TWICE() - Static method in class org.apache.spark.util.SparkExitCode
The default uncaught exception handler was called and an exception was encountered while logging the exception.
UNCOMPRESSED_LOG_FILE_LENGTH_CACHE_SIZE_CONF() - Static method in class org.apache.spark.internal.config.Worker

undefinedImageType() - Static method in class org.apache.spark.ml.image.ImageSchema

underlyingSplit() - Method in class org.apache.spark.scheduler.SplitInfo

unhandledFilters(Filter[]) - Method in class org.apache.spark.sql.sources.BaseRelation
Returns the list of Filters that this datasource may not be able to handle.
unhex(Column) - Static method in class org.apache.spark.sql.functions
Inverse of hex.
UniformGenerator - Class in org.apache.spark.mllib.random
:: DeveloperApi :: Generates i.i.d. samples from U[0.0, 1.0].
UniformGenerator() - Constructor for class org.apache.spark.mllib.random.UniformGenerator

uniformJavaRDD(JavaSparkContext, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
Java-friendly version of RandomRDDs.uniformRDD.
uniformJavaRDD(JavaSparkContext, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.uniformJavaRDD with the default seed.
uniformJavaRDD(JavaSparkContext, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.uniformJavaRDD with the default number of partitions and the default seed.
uniformJavaVectorRDD(JavaSparkContext, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
Java-friendly version of RandomRDDs.uniformVectorRDD.
uniformJavaVectorRDD(JavaSparkContext, long, int, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.uniformJavaVectorRDD with the default seed.
uniformJavaVectorRDD(JavaSparkContext, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
RandomRDDs.uniformJavaVectorRDD with the default number of partitions and the default seed.
uniformRDD(SparkContext, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
Generates an RDD comprised of i.i.d. samples from the uniform distribution U(0.0, 1.0).
uniformVectorRDD(SparkContext, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
Generates an RDD[Vector] with vectors containing i.i.d. samples from the uniform distribution U(0.0, 1.0).
union(JavaDoubleRDD) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return the union of this RDD and another one.
union(JavaPairRDD<K, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
Return the union of this RDD and another one.
union(JavaRDD<T>) - Method in class org.apache.spark.api.java.JavaRDD
Return the union of this RDD and another one.
union(JavaRDD<T>...) - Method in class org.apache.spark.api.java.JavaSparkContext
Build the union of JavaRDDs.
union(JavaPairRDD<K, V>...) - Method in class org.apache.spark.api.java.JavaSparkContext
Build the union of JavaPairRDDs.
union(JavaDoubleRDD...) - Method in class org.apache.spark.api.java.JavaSparkContext
Build the union of JavaDoubleRDDs.
union(Seq<JavaRDD<T>>) - Method in class org.apache.spark.api.java.JavaSparkContext
Build the union of JavaRDDs.
union(Seq<JavaPairRDD<K, V>>) - Method in class org.apache.spark.api.java.JavaSparkContext
Build the union of JavaPairRDDs.
union(Seq<JavaDoubleRDD>) - Method in class org.apache.spark.api.java.JavaSparkContext
Build the union of JavaDoubleRDDs.
union(RDD<T>) - Method in class org.apache.spark.rdd.RDD
Return the union of this RDD and another one.
union(Seq<RDD<T>>, ClassTag<T>) - Method in class org.apache.spark.SparkContext
Build the union of a list of RDDs.
union(RDD<T>, Seq<RDD<T>>, ClassTag<T>) - Method in class org.apache.spark.SparkContext
Build the union of a list of RDDs passed as variable-length arguments.
union(Dataset<T>) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset containing the union of rows in this Dataset and another Dataset.
union(JavaDStream<T>) - Method in class org.apache.spark.streaming.api.java.JavaDStream
Return a new DStream by unifying data of another DStream with this DStream.
union(JavaPairDStream<K, V>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by unifying data of another DStream with this DStream.
union(JavaDStream<T>...) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create a unified DStream from multiple DStreams of the same type and same slide duration.
union(JavaPairDStream<K, V>...) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create a unified DStream from multiple DStreams of the same type and same slide duration.
union(Seq<JavaDStream<T>>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create a unified DStream from multiple DStreams of the same type and same slide duration.
union(Seq<JavaPairDStream<K, V>>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create a unified DStream from multiple DStreams of the same type and same slide duration.
union(DStream<T>) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream by unifying data of another DStream with this DStream.
union(Seq<DStream<T>>, ClassTag<T>) - Method in class org.apache.spark.streaming.StreamingContext
Create a unified DStream from multiple DStreams of the same type and same slide duration.
unionAll(Dataset<T>) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset containing the union of rows in this Dataset and another Dataset.
unionByName(Dataset<T>) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset containing the union of rows in this Dataset and another Dataset.
UnionRDD<T> - Class in org.apache.spark.rdd

UnionRDD(SparkContext, Seq<RDD<T>>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.UnionRDD

uniqueId() - Method in class org.apache.spark.storage.StreamBlockId

unix_timestamp() - Static method in class org.apache.spark.sql.functions
Returns the current Unix timestamp (in seconds) as a long.
unix_timestamp(Column) - Static method in class org.apache.spark.sql.functions
Converts a time string in the format uuuu-MM-dd HH:mm:ss to a Unix timestamp (in seconds), using the default timezone and the default locale.
unix_timestamp(Column, String) - Static method in class org.apache.spark.sql.functions
Converts a time string with the given pattern to a Unix timestamp (in seconds).
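The string-to-epoch-seconds conversion that the unix_timestamp entries above describe can be reproduced outside of Spark with plain java.time; this sketch (no Spark dependency, and assuming UTC rather than Spark's session time zone) shows how the default uuuu-MM-dd HH:mm:ss pattern maps a time string to seconds since the Unix epoch:

```java
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class UnixTimestampSketch {
    // Mirrors unix_timestamp(Column): parse the string with the default
    // pattern, then convert to seconds since the Unix epoch.
    static long toUnixSeconds(String ts) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("uuuu-MM-dd HH:mm:ss");
        return LocalDateTime.parse(ts, fmt).toEpochSecond(ZoneOffset.UTC);
    }

    public static void main(String[] args) {
        // Ten seconds after the epoch parses to 10.
        System.out.println(toUnixSeconds("1970-01-01 00:00:10"));
    }
}
```

In Spark itself the parse happens per row against the session time zone, so results can differ from this UTC sketch when the session time zone is not UTC.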
UnknownReason - Class in org.apache.spark
:: DeveloperApi :: We don't know why the task ended -- for example, because of a ClassNotFound exception when deserializing the task result.
UnknownReason() - Constructor for class org.apache.spark.UnknownReason

UNLIMITED_DECIMAL_PRECISION() - Static method in class org.apache.spark.sql.hive.HiveShim

UNLIMITED_DECIMAL_SCALE() - Static method in class org.apache.spark.sql.hive.HiveShim
 
unlink(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog$

unlink(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity$

unlink(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse$

unlink(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Log$

unlink(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Logit$

unlink(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Probit$

unlink(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt$

UNORDERED() - Static method in class org.apache.spark.rdd.DeterministicLevel

unpersist() - Method in class org.apache.spark.api.java.JavaDoubleRDD
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
unpersist(boolean) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
unpersist() - Method in class org.apache.spark.api.java.JavaPairRDD
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
unpersist(boolean) - Method in class org.apache.spark.api.java.JavaPairRDD
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
unpersist() - Method in class org.apache.spark.api.java.JavaRDD
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
unpersist(boolean) - Method in class org.apache.spark.api.java.JavaRDD
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
unpersist() - Method in class org.apache.spark.broadcast.Broadcast
Asynchronously delete cached copies of this broadcast on the executors.
unpersist(boolean) - Method in class org.apache.spark.broadcast.Broadcast
Delete cached copies of this broadcast on the executors.
unpersist(boolean) - Method in class org.apache.spark.graphx.Graph
Uncaches both vertices and edges of this graph.
unpersist(boolean) - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl

unpersist(boolean) - Method in class org.apache.spark.graphx.impl.GraphImpl

unpersist(boolean) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl

unpersist() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Unpersist intermediate RDDs used in the computation.
unpersist(boolean) - Method in class org.apache.spark.rdd.RDD
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
unpersist(boolean) - Method in class org.apache.spark.sql.Dataset
Mark the Dataset as non-persistent, and remove all blocks for it from memory and disk.
unpersist() - Method in class org.apache.spark.sql.Dataset
Mark the Dataset as non-persistent, and remove all blocks for it from memory and disk.
unpersistRDDFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

unpersistRDDToJson(SparkListenerUnpersistRDD) - Static method in class org.apache.spark.util.JsonProtocol

unpersistVertices(boolean) - Method in class org.apache.spark.graphx.Graph
Uncaches only the vertices of this graph, leaving the edges alone.
unpersistVertices(boolean) - Method in class org.apache.spark.graphx.impl.GraphImpl

UnrecognizedBlockId - Exception in org.apache.spark.storage

UnrecognizedBlockId(String) - Constructor for exception org.apache.spark.storage.UnrecognizedBlockId
 
unregister(String) - Method in class org.apache.spark.rpc.netty.DedicatedMessageLoop

unregister(String) - Method in class org.apache.spark.rpc.netty.MessageLoop

unregister(String) - Method in class org.apache.spark.rpc.netty.SharedMessageLoop

unregister(QueryExecutionListener) - Method in class org.apache.spark.sql.util.ExecutionListenerManager
Unregisters the specified QueryExecutionListener.
unregisterDialect(JdbcDialect) - Static method in class org.apache.spark.sql.jdbc.JdbcDialects
Unregister a dialect.
Unresolved() - Static method in class org.apache.spark.ml.attribute.AttributeType
Unresolved type.
UnresolvedAttribute - Class in org.apache.spark.ml.attribute
:: DeveloperApi :: An unresolved attribute.
UnresolvedAttribute() - Constructor for class org.apache.spark.ml.attribute.UnresolvedAttribute

unset() - Static method in class org.apache.spark.rdd.InputFileBlockHolder
Clears the input file block to its default value.
unset(String) - Method in class org.apache.spark.sql.RuntimeConfig
Resets the configuration property for the given key.
until(Time, Duration) - Method in class org.apache.spark.streaming.Time

unwrapOrcStructs(Configuration, StructType, StructType, Option<StructObjectInspector>, Iterator<Writable>) - Static method in class org.apache.spark.sql.hive.orc.OrcFileFormat

unwrapperFor(ObjectInspector) - Method in interface org.apache.spark.sql.hive.HiveInspectors
Builds unwrappers ahead of time according to object inspector types to avoid pattern matching and branching costs per row.
unwrapperFor(StructField) - Method in interface org.apache.spark.sql.hive.HiveInspectors
Builds unwrappers ahead of time according to object inspector types to avoid pattern matching and branching costs per row.
unwrapperFor(ObjectInspector) - Static method in class org.apache.spark.sql.hive.orc.OrcFileFormat

unwrapperFor(StructField) - Static method in class org.apache.spark.sql.hive.orc.OrcFileFormat
 
update(int, int, double) - Method in interface org.apache.spark.ml.linalg.Matrix
Update element at (i, j).
update(Function1<Object, Object>) - Method in interface org.apache.spark.ml.linalg.Matrix
Update all the values of this matrix using the function f.
update(RDD<Vector>, double, String) - Method in class org.apache.spark.mllib.clustering.StreamingKMeansModel
Perform a k-means update on a batch of data.
update(int, int, double) - Method in interface org.apache.spark.mllib.linalg.Matrix
Update element at (i, j).
update(Function1<Object, Object>) - Method in interface org.apache.spark.mllib.linalg.Matrix
Update all the values of this matrix using the function f.
update() - Method in class org.apache.spark.scheduler.AccumulableInfo

update(int, Object) - Method in class org.apache.spark.sql.expressions.MutableAggregationBuffer
Update the ith value of this buffer.
update(MutableAggregationBuffer, Row) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
Updates the given aggregation buffer buffer with new input data from input.
update(S) - Method in interface org.apache.spark.sql.streaming.GroupState
Update the value of the state.
Update() - Static method in class org.apache.spark.sql.streaming.OutputMode
OutputMode in which only the rows that were updated in the streaming DataFrame/Dataset will be written to the sink every time there are some updates.
update(int, Object) - Method in class org.apache.spark.sql.vectorized.ColumnarArray

update(int, Object) - Method in class org.apache.spark.sql.vectorized.ColumnarRow

update() - Method in class org.apache.spark.status.api.v1.AccumulableInfo

update(Seq<String>, long, long) - Method in class org.apache.spark.status.LiveRDDPartition

update(S) - Method in class org.apache.spark.streaming.State
Update the state with a new value.
update(T1, T2) - Method in class org.apache.spark.util.MutablePair
Updates this pair with new values and returns itself.
UPDATE_INTERVAL_S() - Static method in class org.apache.spark.internal.config.History

UpdateBlockInfo(BlockManagerId, BlockId, StorageLevel, long, long) - Constructor for class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo

UpdateBlockInfo() - Constructor for class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo

UpdateBlockInfo$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo$

updateColumnComment(String[], String) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
Create a TableChange for updating the comment of a field.
updateColumnType(String[], DataType) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
Create a TableChange for updating the type of a field that is nullable.
updateColumnType(String[], DataType, boolean) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
Create a TableChange for updating the type of a field.
UPDATED_BLOCK_STATUSES() - Static method in class org.apache.spark.InternalAccumulator

UpdateDelegationTokens(byte[]) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateDelegationTokens

UpdateDelegationTokens$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateDelegationTokens$

updateMetrics(TaskMetrics) - Method in class org.apache.spark.status.LiveTask
Update the metrics for the task and return the difference between the previous and new values.
updatePrediction(Vector, double, DecisionTreeRegressionModel, double) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
Add prediction from a new boosting iteration to an existing prediction.
updatePredictionError(RDD<org.apache.spark.ml.feature.Instance>, RDD<Tuple2<Object, Object>>, double, DecisionTreeRegressionModel, Loss) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
Update a zipped predictionError RDD (as obtained with computeInitialPredictionAndError).
updatePredictionError(RDD<LabeledPoint>, RDD<Tuple2<Object, Object>>, double, DecisionTreeModel, Loss) - Static method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
:: DeveloperApi :: Update a zipped predictionError RDD (as obtained with computeInitialPredictionAndError).
Updater - Class in org.apache.spark.mllib.optimization
:: DeveloperApi :: Class used to perform steps (weight update) using Gradient Descent methods.
Updater() - Constructor for class org.apache.spark.mllib.optimization.Updater
 
updateSparkConfigFromProperties(SparkConf, Map<String, String>) - Static method in class org.apache.spark.util.Utils
Updates Spark config with properties from a set of Properties.
updateStateByKey(Function2<List<V>, Optional<S>, Optional<S>>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
updateStateByKey(Function2<List<V>, Optional<S>, Optional<S>>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
updateStateByKey(Function2<List<V>, Optional<S>, Optional<S>>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.
updateStateByKey(Function2<List<V>, Optional<S>, Optional<S>>, Partitioner, JavaPairRDD<K, S>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.
updateStateByKey(Function2<Seq<V>, Option<S>, Option<S>>, ClassTag<S>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
updateStateByKey(Function2<Seq<V>, Option<S>, Option<S>>, int, ClassTag<S>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
updateStateByKey(Function2<Seq<V>, Option<S>, Option<S>>, Partitioner, ClassTag<S>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.
updateStateByKey(Function1<Iterator<Tuple3<K, Seq<V>, Option<S>>>, Iterator<Tuple2<K, S>>>, Partitioner, boolean, ClassTag<S>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
updateStateByKey(Function2<Seq<V>, Option<S>, Option<S>>, Partitioner, RDD<Tuple2<K, S>>, ClassTag<S>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.
updateStateByKey(Function1<Iterator<Tuple3<K, Seq<V>, Option<S>>>, Iterator<Tuple2<K, S>>>, Partitioner, boolean, RDD<Tuple2<K, S>>, ClassTag<S>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
updateStateByKey(Function4<Time, K, Seq<V>, Option<S>, Option<S>>, Partitioner, boolean, Option<RDD<Tuple2<K, S>>>, ClassTag<S>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.
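The per-key contract the updateStateByKey entries above describe -- for each key, apply the user function to (new values in this batch, previous optional state) to produce the next state -- can be sketched without Spark as a single in-memory batch step. This is an illustrative analogue using plain JDK collections, not the Spark Streaming API itself:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.function.BiFunction;

public class UpdateStateSketch {
    // One batch step: for every key seen in the batch or in the previous
    // state, call the user function with (batch values, previous state).
    static <K, V, S> Map<K, S> updateStateByKey(
            Map<K, List<V>> batch,
            Map<K, S> previousState,
            BiFunction<List<V>, Optional<S>, Optional<S>> updateFunc) {
        Map<K, S> next = new HashMap<>();
        Set<K> keys = new HashSet<>(batch.keySet());
        keys.addAll(previousState.keySet());
        for (K key : keys) {
            List<V> values = batch.getOrDefault(key, Collections.emptyList());
            Optional<S> prev = Optional.ofNullable(previousState.get(key));
            // Returning an empty Optional drops the key from the state,
            // mirroring how None/absent removes a key in Spark Streaming.
            updateFunc.apply(values, prev).ifPresent(s -> next.put(key, s));
        }
        return next;
    }

    public static void main(String[] args) {
        Map<String, List<Integer>> batch = Map.of("a", List.of(1, 2), "b", List.of(5));
        Map<String, Integer> state = Map.of("a", 10);
        // Running-sum update: previous state (or 0) plus the batch's values.
        Map<String, Integer> next = updateStateByKey(batch, state,
                (vals, prev) -> Optional.of(prev.orElse(0)
                        + vals.stream().mapToInt(i -> i).sum()));
        System.out.println(next); // a -> 13, b -> 5
    }
}
```

In Spark the same function runs per key on each micro-batch, with the state checkpointed and partitioned by the supplied Partitioner rather than held in a single HashMap.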
upper() - Method in class org.apache.spark.ml.feature.RobustScaler

upper() - Method in class org.apache.spark.ml.feature.RobustScalerModel

upper() - Method in interface org.apache.spark.ml.feature.RobustScalerParams
Upper quantile to calculate quantile range, shared by all features. Default: 0.75.
upper(Column) - Static method in class org.apache.spark.sql.functions
Converts a string column to upper case.
upperBoundsOnCoefficients() - Method in class org.apache.spark.ml.classification.LogisticRegression

upperBoundsOnCoefficients() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel

upperBoundsOnCoefficients() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
The upper bounds on coefficients if fitting under bound constrained optimization.
upperBoundsOnIntercepts() - Method in class org.apache.spark.ml.classification.LogisticRegression

upperBoundsOnIntercepts() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel

upperBoundsOnIntercepts() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
The upper bounds on intercepts if fitting under bound constrained optimization.
useCommitCoordinator() - Method in interface org.apache.spark.sql.connector.write.BatchWrite
Returns whether Spark should use the commit coordinator to ensure that at most one task for each partition commits.
useDisk() - Method in class org.apache.spark.storage.StorageLevel

usedOffHeap() - Method in class org.apache.spark.status.LiveExecutor

usedOffHeapStorageMemory() - Method in interface org.apache.spark.SparkExecutorInfo

usedOffHeapStorageMemory() - Method in class org.apache.spark.SparkExecutorInfoImpl

usedOffHeapStorageMemory() - Method in class org.apache.spark.status.api.v1.MemoryMetrics

usedOnHeap() - Method in class org.apache.spark.status.LiveExecutor

usedOnHeapStorageMemory() - Method in interface org.apache.spark.SparkExecutorInfo

usedOnHeapStorageMemory() - Method in class org.apache.spark.SparkExecutorInfoImpl

usedOnHeapStorageMemory() - Method in class org.apache.spark.status.api.v1.MemoryMetrics

useDst - Variable in class org.apache.spark.graphx.TripletFields
Indicates whether the destination vertex attribute is included.
useEdge - Variable in class org.apache.spark.graphx.TripletFields
Indicates whether the edge attribute is included.
useMemory() - Method in class org.apache.spark.storage.StorageLevel

useNodeIdCache() - Method in class org.apache.spark.mllib.tree.configuration.Strategy

useOffHeap() - Method in class org.apache.spark.storage.StorageLevel

user() - Method in class org.apache.spark.ml.recommendation.ALS.Rating

user() - Method in class org.apache.spark.mllib.recommendation.Rating

USER_DEFAULT() - Static method in class org.apache.spark.sql.types.DecimalType

USER_GROUPS_MAPPING() - Static method in class org.apache.spark.internal.config.UI

userClass() - Method in class org.apache.spark.mllib.linalg.VectorUDT

userCol() - Method in class org.apache.spark.ml.recommendation.ALS

userCol() - Method in class org.apache.spark.ml.recommendation.ALSModel

userCol() - Method in interface org.apache.spark.ml.recommendation.ALSModelParams
Param for the column name for user ids.
UserDefinedAggregateFunction - Class in org.apache.spark.sql.expressions
The base class for implementing user-defined aggregate functions (UDAF).
UserDefinedAggregateFunction() - Constructor for class org.apache.spark.sql.expressions.UserDefinedAggregateFunction

UserDefinedFunction - Class in org.apache.spark.sql.expressions
A user-defined function.
UserDefinedFunction() - Constructor for class org.apache.spark.sql.expressions.UserDefinedFunction

userFactors() - Method in class org.apache.spark.ml.recommendation.ALSModel

userFeatures() - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel

userName() - Method in interface org.apache.spark.sql.hive.client.HiveClient
Returns the user name which is used as the owner for Hive tables.
userPort(int, int) - Static method in class org.apache.spark.util.Utils
Returns the user port to try when trying to bind a service.
useSrc - Variable in class org.apache.spark.graphx.TripletFields
Indicates whether the source vertex attribute is included.
using(String) - Method in interface org.apache.spark.sql.CreateTableWriter
Specifies a provider for the underlying output data source.
using(String) - Method in class org.apache.spark.sql.DataFrameWriterV2

usingBoundConstrainedOptimization() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams

Utils - Class in org.apache.spark.ml.impl

Utils() - Constructor for class org.apache.spark.ml.impl.Utils

Utils - Class in org.apache.spark.util
Various utility methods used by Spark.
Utils() - Constructor for class org.apache.spark.util.Utils

UUIDFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol

UUIDToJson(UUID) - Static method in class org.apache.spark.util.JsonProtocol
 

V

V() - Method in class org.apache.spark.mllib.linalg.SingularValueDecomposition

V1WriteBuilder - Interface in org.apache.spark.sql.connector.write
A trait that should be implemented by V1 DataSources that would like to leverage the DataSource V2 write code paths.
validate() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
Validates the block matrix info against the matrix data (blocks) and throws an exception if any error is found.
validateAndTransformField(StructType, String, String) - Method in interface org.apache.spark.ml.feature.StringIndexerBase

validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.classification.ClassifierParams

validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams

validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.classification.ProbabilisticClassifierParams

validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.clustering.BisectingKMeansParams
Validates and transforms the input schema.
validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.clustering.GaussianMixtureParams
Validates and transforms the input schema.
validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.clustering.KMeansParams
Validates and transforms the input schema.
validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.clustering.LDAParams
Validates and transforms the input schema.
validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
Validates and transforms the input schema.
validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.IDFBase
Validate and transform the input schema.
validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.ImputerParams
Validates and transforms the input schema.
validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.LSHParams
Transform the schema for LSH.
validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.MaxAbsScalerParams
Validates and transforms the input schema.
validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.MinMaxScalerParams
Validates and transforms the input schema.
validateAndTransformSchema(StructType, boolean, boolean) - Method in interface org.apache.spark.ml.feature.OneHotEncoderBase

validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.PCAParams
Validates and transforms the input schema.
validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.RobustScalerParams
Validates and transforms the input schema.
validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.StandardScalerParams
Validates and transforms the input schema.
validateAndTransformSchema(StructType, boolean) - Method in interface org.apache.spark.ml.feature.StringIndexerBase
Validates and transforms the input schema.
validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.Word2VecBase
Validate and transform the input schema.
validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.fpm.FPGrowthParams
Validates and transforms the input schema.
validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.PredictorParams
Validates and transforms the input schema with the provided param map.
validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.recommendation.ALSParams
Validates and transforms the input schema.
validateAndTransformSchema(StructType, boolean) - Method in interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams
Validates and transforms the input schema with the provided param map.
validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase

validateAndTransformSchema(StructType, boolean) - Method in interface org.apache.spark.ml.regression.IsotonicRegressionBase
Validates and transforms the input schema.
validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.regression.LinearRegressionParams

validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.tree.DecisionTreeClassifierParams

validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.tree.DecisionTreeRegressorParams

validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.tree.TreeEnsembleClassifierParams

validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.tree.TreeEnsembleRegressorParams

validateDirectoryUri(String) - Method in interface org.apache.spark.rpc.RpcEnvFileServer
Validates and normalizes the base URI for directories.
validateStages(PipelineStage[]) - Method in class org.apache.spark.ml.Pipeline.SharedReadWrite$
Check that all stages are Writable.
validateURL(URI) - Static method in class org.apache.spark.util.Utils
Validate that a given URI is actually a valid URL as well.
validateVectorCompatibleColumn(StructType, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
Check whether the given column in the schema is one of the supported vector types: Vector, Array[Float].
validationIndicatorCol() - Method in class org.apache.spark.ml.classification.GBTClassificationModel

validationIndicatorCol() - Method in class org.apache.spark.ml.classification.GBTClassifier

validationIndicatorCol() - Method in interface org.apache.spark.ml.param.shared.HasValidationIndicatorCol
Param for the name of the column that indicates whether each row is for training or for validation.
validationIndicatorCol() - Method in class org.apache.spark.ml.regression.GBTRegressionModel

validationIndicatorCol() - Method in class org.apache.spark.ml.regression.GBTRegressor

validationMetrics() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel

validationTol() - Method in class org.apache.spark.ml.classification.GBTClassificationModel

validationTol() - Method in class org.apache.spark.ml.classification.GBTClassifier

validationTol() - Method in class org.apache.spark.ml.regression.GBTRegressionModel

validationTol() - Method in class org.apache.spark.ml.regression.GBTRegressor

validationTol() - Method in interface org.apache.spark.ml.tree.GBTParams
Threshold for stopping early when fit with validation is used.
validationTol() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy

ValidatorParams - Interface in org.apache.spark.ml.tuning
value() - Method in class org.apache.spark.broadcast.Broadcast
Get the broadcasted value.
value() - Method in class org.apache.spark.ComplexFutureAction

value() - Method in interface org.apache.spark.FutureAction
The value of this Future.
value() - Method in class org.apache.spark.ml.param.ParamPair

value() - Method in class org.apache.spark.mllib.linalg.distributed.MatrixEntry

value() - Method in class org.apache.spark.mllib.stat.test.BinarySample

value() - Method in class org.apache.spark.scheduler.AccumulableInfo

value() - Method in class org.apache.spark.SerializableWritable

value() - Method in class org.apache.spark.SimpleFutureAction

value() - Method in class org.apache.spark.sql.connector.catalog.NamespaceChange.SetProperty

value() - Method in class org.apache.spark.sql.connector.catalog.TableChange.SetProperty

value() - Method in interface org.apache.spark.sql.connector.expressions.Literal
Returns the literal value.
value() - Method in class org.apache.spark.sql.sources.EqualNullSafe

value() - Method in class org.apache.spark.sql.sources.EqualTo

value() - Method in class org.apache.spark.sql.sources.GreaterThan

value() - Method in class org.apache.spark.sql.sources.GreaterThanOrEqual

value() - Method in class org.apache.spark.sql.sources.LessThan

value() - Method in class org.apache.spark.sql.sources.LessThanOrEqual

value() - Method in class org.apache.spark.sql.sources.StringContains
 
value() - 类 中的方法org.apache.spark.sql.sources.StringEndsWith
 
value() - 类 中的方法org.apache.spark.sql.sources.StringStartsWith
 
value() - 类 中的方法org.apache.spark.status.api.v1.AccumulableInfo
 
value() - 类 中的方法org.apache.spark.status.LiveRDDPartition
 
value() - 类 中的方法org.apache.spark.storage.memory.DeserializedMemoryEntry
 
value() - 类 中的方法org.apache.spark.util.AccumulatorV2
Defines the current value of this accumulator
value() - 类 中的方法org.apache.spark.util.CollectionAccumulator
 
value() - 类 中的方法org.apache.spark.util.DoubleAccumulator
 
value() - 类 中的方法org.apache.spark.util.LongAccumulator
 
value() - 类 中的方法org.apache.spark.util.SerializableConfiguration
 
valueArray() - 类 中的方法org.apache.spark.sql.vectorized.ColumnarMap
 
valueContainsNull() - 类 中的方法org.apache.spark.sql.types.MapType
 
valueOf(String) - 枚举 中的静态方法org.apache.spark.graphx.impl.EdgeActiveness
返回带有指定名称的该类型的枚举常量。
valueOf(String) - 枚举 中的静态方法org.apache.spark.JobExecutionStatus
返回带有指定名称的该类型的枚举常量。
valueOf(String) - 枚举 中的静态方法org.apache.spark.launcher.SparkAppHandle.State
返回带有指定名称的该类型的枚举常量。
valueOf(String) - 枚举 中的静态方法org.apache.spark.sql.connector.catalog.TableCapability
返回带有指定名称的该类型的枚举常量。
valueOf(String) - 枚举 中的静态方法org.apache.spark.sql.SaveMode
返回带有指定名称的该类型的枚举常量。
valueOf(String) - 枚举 中的静态方法org.apache.spark.status.api.v1.ApplicationStatus
返回带有指定名称的该类型的枚举常量。
valueOf(String) - 枚举 中的静态方法org.apache.spark.status.api.v1.StageStatus
返回带有指定名称的该类型的枚举常量。
valueOf(String) - 枚举 中的静态方法org.apache.spark.status.api.v1.streaming.BatchStatus
返回带有指定名称的该类型的枚举常量。
valueOf(String) - 枚举 中的静态方法org.apache.spark.status.api.v1.TaskSorting
返回带有指定名称的该类型的枚举常量。
valueOf(String) - 枚举 中的静态方法org.apache.spark.streaming.StreamingContextState
返回带有指定名称的该类型的枚举常量。
valueOf(String) - 枚举 中的静态方法org.apache.spark.util.sketch.BloomFilter.Version
返回带有指定名称的该类型的枚举常量。
valueOf(String) - 枚举 中的静态方法org.apache.spark.util.sketch.CountMinSketch.Version
返回带有指定名称的该类型的枚举常量。
values() - 类 中的方法org.apache.spark.api.java.JavaPairRDD
Return an RDD with the values of each tuple.
values() - 枚举 中的静态方法org.apache.spark.graphx.impl.EdgeActiveness
按照声明该枚举类型的常量的顺序, 返回 包含这些常量的数组。
values() - 枚举 中的静态方法org.apache.spark.JobExecutionStatus
按照声明该枚举类型的常量的顺序, 返回 包含这些常量的数组。
values() - 枚举 中的静态方法org.apache.spark.launcher.SparkAppHandle.State
按照声明该枚举类型的常量的顺序, 返回 包含这些常量的数组。
VALUES() - 类 中的静态方法org.apache.spark.ml.attribute.AttributeKeys
 
values() - 类 中的方法org.apache.spark.ml.attribute.BinaryAttribute
 
values() - 类 中的方法org.apache.spark.ml.attribute.NominalAttribute
 
values() - 类 中的方法org.apache.spark.ml.linalg.DenseMatrix
 
values() - 类 中的方法org.apache.spark.ml.linalg.DenseVector
 
values() - 类 中的方法org.apache.spark.ml.linalg.SparseMatrix
 
values() - 类 中的方法org.apache.spark.ml.linalg.SparseVector
 
values() - 类 中的方法org.apache.spark.mllib.linalg.DenseMatrix
 
values() - 类 中的方法org.apache.spark.mllib.linalg.DenseVector
 
values() - 类 中的方法org.apache.spark.mllib.linalg.SparseMatrix
 
values() - 类 中的方法org.apache.spark.mllib.linalg.SparseVector
 
values() - 类 中的静态方法org.apache.spark.mllib.tree.configuration.Algo
 
values() - 类 中的静态方法org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
 
values() - 类 中的静态方法org.apache.spark.mllib.tree.configuration.FeatureType
 
values() - 类 中的静态方法org.apache.spark.mllib.tree.configuration.QuantileStrategy
 
values() - 类 中的静态方法org.apache.spark.rdd.CheckpointState
 
values() - 类 中的静态方法org.apache.spark.rdd.DeterministicLevel
 
values() - 类 中的方法org.apache.spark.rdd.PairRDDFunctions
Return an RDD with the values of each tuple.
values() - 类 中的静态方法org.apache.spark.scheduler.SchedulingMode
 
values() - 类 中的静态方法org.apache.spark.scheduler.TaskLocality
 
values() - 枚举 中的静态方法org.apache.spark.sql.connector.catalog.TableCapability
按照声明该枚举类型的常量的顺序, 返回 包含这些常量的数组。
values() - 枚举 中的静态方法org.apache.spark.sql.SaveMode
按照声明该枚举类型的常量的顺序, 返回 包含这些常量的数组。
values() - 类 中的方法org.apache.spark.sql.sources.In
 
values() - 类 中的方法org.apache.spark.sql.util.CaseInsensitiveStringMap
 
values() - 枚举 中的静态方法org.apache.spark.status.api.v1.ApplicationStatus
按照声明该枚举类型的常量的顺序, 返回 包含这些常量的数组。
values() - 枚举 中的静态方法org.apache.spark.status.api.v1.StageStatus
按照声明该枚举类型的常量的顺序, 返回 包含这些常量的数组。
values() - 枚举 中的静态方法org.apache.spark.status.api.v1.streaming.BatchStatus
按照声明该枚举类型的常量的顺序, 返回 包含这些常量的数组。
values() - 枚举 中的静态方法org.apache.spark.status.api.v1.TaskSorting
按照声明该枚举类型的常量的顺序, 返回 包含这些常量的数组。
values() - 类 中的静态方法org.apache.spark.streaming.scheduler.ReceiverState
 
values() - 枚举 中的静态方法org.apache.spark.streaming.StreamingContextState
按照声明该枚举类型的常量的顺序, 返回 包含这些常量的数组。
values() - 类 中的静态方法org.apache.spark.TaskState
 
values() - 枚举 中的静态方法org.apache.spark.util.sketch.BloomFilter.Version
按照声明该枚举类型的常量的顺序, 返回 包含这些常量的数组。
values() - 枚举 中的静态方法org.apache.spark.util.sketch.CountMinSketch.Version
按照声明该枚举类型的常量的顺序, 返回 包含这些常量的数组。
ValuesHolder<T> - org.apache.spark.storage.memory中的接口
 
valueType() - 类 中的方法org.apache.spark.sql.types.MapType
 
var_pop(Column) - 类 中的静态方法org.apache.spark.sql.functions
Aggregate function: returns the population variance of the values in a group.
var_pop(String) - 类 中的静态方法org.apache.spark.sql.functions
Aggregate function: returns the population variance of the values in a group.
var_samp(Column) - 类 中的静态方法org.apache.spark.sql.functions
Aggregate function: returns the unbiased variance of the values in a group.
var_samp(String) - 类 中的静态方法org.apache.spark.sql.functions
Aggregate function: returns the unbiased variance of the values in a group.
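The var_pop and var_samp entries above differ only in the divisor: the population variance divides by n, while the unbiased sample variance divides by n - 1. A minimal plain-Python sketch of the same two formulas (stdlib only, not Spark code):

```python
import statistics

# Population vs. sample variance -- the same distinction var_pop / var_samp make.
data = [2.0, 4.0, 6.0, 8.0]        # mean = 5.0, squared deviations sum to 20.0

pop = statistics.pvariance(data)   # divides by n      -> 20 / 4 = 5.0
samp = statistics.variance(data)   # divides by n - 1  -> 20 / 3 = 6.666...

print(pop, samp)
```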
VarcharType - Class in org.apache.spark.sql.types
Hive varchar type.
VarcharType(int) - Constructor for class org.apache.spark.sql.types.VarcharType
 
variance() - Method in class org.apache.spark.api.java.JavaDoubleRDD
Compute the population variance of this RDD's elements.
variance(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
 
variance(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
 
variance(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
 
variance(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
 
variance(Column, Column) - Static method in class org.apache.spark.ml.stat.Summarizer
 
variance(Column) - Static method in class org.apache.spark.ml.stat.Summarizer
 
variance() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
Unbiased estimate of sample variance of each dimension.
variance() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
Sample variance vector.
Variance - Class in org.apache.spark.mllib.tree.impurity
Class for calculating variance during regression.
Variance() - Constructor for class org.apache.spark.mllib.tree.impurity.Variance
 
variance() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
Compute the population variance of this RDD's elements.
variance(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: alias for var_samp.
variance(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: alias for var_samp.
variance() - Method in class org.apache.spark.util.StatCounter
Return the population variance of the values.
varianceCol() - Method in interface org.apache.spark.ml.param.shared.HasVarianceCol
Param for the column name for the biased sample variance of prediction.
varianceCol() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
 
varianceCol() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
 
variancePower() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
 
variancePower() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
Param for the power in the variance function of the Tweedie distribution which provides the relationship between the variance and mean of the distribution.
variancePower() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
 
vClassTag() - Method in class org.apache.spark.api.java.JavaHadoopRDD
 
vClassTag() - Method in class org.apache.spark.api.java.JavaNewHadoopRDD
 
vClassTag() - Method in class org.apache.spark.api.java.JavaPairRDD
 
vClassTag() - Method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
 
vClassTag() - Method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
 
Vector - Interface in org.apache.spark.ml.linalg
Represents a numeric vector, whose index type is Int and value type is Double.
vector() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRow
 
Vector - Interface in org.apache.spark.mllib.linalg
Represents a numeric vector, whose index type is Int and value type is Double.
vector() - Method in class org.apache.spark.storage.memory.DeserializedValuesHolder
 
VectorAssembler - Class in org.apache.spark.ml.feature
A feature transformer that merges multiple columns into a vector column.
VectorAssembler(String) - Constructor for class org.apache.spark.ml.feature.VectorAssembler
 
VectorAssembler() - Constructor for class org.apache.spark.ml.feature.VectorAssembler
 
VectorAttributeRewriter - Class in org.apache.spark.ml.feature
Utility transformer that rewrites Vector attribute names via prefix replacement.
VectorAttributeRewriter(String, String, Map<String, String>) - Constructor for class org.apache.spark.ml.feature.VectorAttributeRewriter
 
VectorAttributeRewriter(String, Map<String, String>) - Constructor for class org.apache.spark.ml.feature.VectorAttributeRewriter
 
vectorCol() - Method in class org.apache.spark.ml.feature.VectorAttributeRewriter
 
VectorImplicits - Class in org.apache.spark.mllib.linalg
Implicit methods available in Scala for converting between the mllib and ml Vector types.
VectorImplicits() - Constructor for class org.apache.spark.mllib.linalg.VectorImplicits
 
VectorIndexer - Class in org.apache.spark.ml.feature
Class for indexing categorical feature columns in a dataset of Vector.
VectorIndexer(String) - Constructor for class org.apache.spark.ml.feature.VectorIndexer
 
VectorIndexer() - Constructor for class org.apache.spark.ml.feature.VectorIndexer
 
VectorIndexerModel - Class in org.apache.spark.ml.feature
Model fitted by VectorIndexer.
VectorIndexerParams - Interface in org.apache.spark.ml.feature
Private trait for params for VectorIndexer and VectorIndexerModel.
Vectors - Class in org.apache.spark.ml.linalg
Factory methods for Vector.
Vectors() - Constructor for class org.apache.spark.ml.linalg.Vectors
 
Vectors - Class in org.apache.spark.mllib.linalg
Factory methods for Vector.
Vectors() - Constructor for class org.apache.spark.mllib.linalg.Vectors
 
vectorSize() - Method in class org.apache.spark.ml.feature.Word2Vec
 
vectorSize() - Method in interface org.apache.spark.ml.feature.Word2VecBase
The dimension of the code that you want to transform from words.
vectorSize() - Method in class org.apache.spark.ml.feature.Word2VecModel
 
VectorSizeHint - Class in org.apache.spark.ml.feature
A feature transformer that adds size information to the metadata of a vector column.
VectorSizeHint(String) - Constructor for class org.apache.spark.ml.feature.VectorSizeHint
 
VectorSizeHint() - Constructor for class org.apache.spark.ml.feature.VectorSizeHint
 
VectorSlicer - Class in org.apache.spark.ml.feature
This class takes a feature vector and outputs a new feature vector with a subarray of the original features.
VectorSlicer(String) - Constructor for class org.apache.spark.ml.feature.VectorSlicer
 
VectorSlicer() - Constructor for class org.apache.spark.ml.feature.VectorSlicer
 
VectorTransformer - Interface in org.apache.spark.mllib.feature
:: DeveloperApi :: Trait for transformation of a vector.
VectorType() - Static method in class org.apache.spark.ml.linalg.SQLDataTypes
Data type for Vector.
VectorUDT - Class in org.apache.spark.mllib.linalg
:: AlphaComponent :: User-defined type for Vector which allows easy interaction with SQL via Dataset.
VectorUDT() - Constructor for class org.apache.spark.mllib.linalg.VectorUDT
 
VENDOR() - Static method in class org.apache.spark.resource.ResourceUtils
 
version() - Method in class org.apache.spark.api.java.JavaSparkContext
The version of Spark on which this application is running.
version() - Method in class org.apache.spark.SparkContext
The version of Spark on which this application is running.
version() - Method in interface org.apache.spark.sql.hive.client.HiveClient
Returns the Hive version of this client.
version() - Method in class org.apache.spark.sql.SparkSession
The version of Spark on which this application is running.
VersionInfo - Class in org.apache.spark.status.api.v1
 
VersionUtils - Class in org.apache.spark.util
Utilities for working with Spark version strings.
VersionUtils() - Constructor for class org.apache.spark.util.VersionUtils
 
vertcat(Matrix[]) - Static method in class org.apache.spark.ml.linalg.Matrices
Vertically concatenate a sequence of matrices.
vertcat(Matrix[]) - Static method in class org.apache.spark.mllib.linalg.Matrices
Vertically concatenate a sequence of matrices.
vertexAttr(long) - Method in class org.apache.spark.graphx.EdgeTriplet
Get the vertex object for the given vertex in the edge.
VertexPartitionBaseOpsConstructor<T extends org.apache.spark.graphx.impl.VertexPartitionBase<Object>> - Interface in org.apache.spark.graphx.impl
A typeclass for subclasses of VertexPartitionBase representing the ability to wrap them in a VertexPartitionBaseOps.
VertexRDD<VD> - Class in org.apache.spark.graphx
Extends RDD[(VertexId, VD)] by ensuring that there is only one entry for each vertex and by pre-indexing the entries for fast, efficient joins.
VertexRDD(SparkContext, Seq<Dependency<?>>) - Constructor for class org.apache.spark.graphx.VertexRDD
 
VertexRDDImpl<VD> - Class in org.apache.spark.graphx.impl
 
vertices() - Method in class org.apache.spark.graphx.Graph
An RDD containing the vertices and their associated attributes.
vertices() - Method in class org.apache.spark.graphx.impl.GraphImpl
 
viewToSeq(KVStoreView<T>, int, Function1<T, Object>) - Static method in class org.apache.spark.status.KVUtils
Turns a KVStoreView into a Scala sequence, applying a filter.
visit(int, int, String, String, String, String[]) - Method in class org.apache.spark.util.InnerClosureFinder
 
visitMethod(int, String, String, String, String[]) - Method in class org.apache.spark.util.InnerClosureFinder
 
visitMethod(int, String, String, String, String[]) - Method in class org.apache.spark.util.ReturnStatementFinder
 
vizHeaderNodes(HttpServletRequest) - Static method in class org.apache.spark.ui.UIUtils
 
vManifest() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
 
vocabSize() - Method in class org.apache.spark.ml.clustering.LDAModel
 
vocabSize() - Method in class org.apache.spark.ml.feature.CountVectorizer
 
vocabSize() - Method in class org.apache.spark.ml.feature.CountVectorizerModel
 
vocabSize() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
Max size of the vocabulary.
vocabSize() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
 
vocabSize() - Method in class org.apache.spark.mllib.clustering.LDAModel
Vocabulary size (number of terms in the vocabulary).
vocabSize() - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
 
vocabulary() - Method in class org.apache.spark.ml.feature.CountVectorizerModel
 
VocabWord - Class in org.apache.spark.mllib.feature
Entry in vocabulary.
VocabWord(String, long, int[], int[], int) - Constructor for class org.apache.spark.mllib.feature.VocabWord
 
VoidFunction<T> - Interface in org.apache.spark.api.java.function
A function with no return value.
VoidFunction2<T1,T2> - Interface in org.apache.spark.api.java.function
A two-argument function that takes arguments of type T1 and T2 with no return value.
Vote() - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
 

W

w(boolean) - Method in class org.apache.spark.ml.param.BooleanParam
Creates a param pair with the given value (for Java).
w(List<List<Double>>) - Method in class org.apache.spark.ml.param.DoubleArrayArrayParam
Creates a param pair with a `java.util.List` of values (for Java and Python).
w(List<Double>) - Method in class org.apache.spark.ml.param.DoubleArrayParam
Creates a param pair with a `java.util.List` of values (for Java and Python).
w(double) - Method in class org.apache.spark.ml.param.DoubleParam
Creates a param pair with the given value (for Java).
w(float) - Method in class org.apache.spark.ml.param.FloatParam
Creates a param pair with the given value (for Java).
w(List<Integer>) - Method in class org.apache.spark.ml.param.IntArrayParam
Creates a param pair with a `java.util.List` of values (for Java and Python).
w(int) - Method in class org.apache.spark.ml.param.IntParam
Creates a param pair with the given value (for Java).
w(long) - Method in class org.apache.spark.ml.param.LongParam
Creates a param pair with the given value (for Java).
w(T) - Method in class org.apache.spark.ml.param.Param
Creates a param pair with the given value (for Java).
w(List<String>) - Method in class org.apache.spark.ml.param.StringArrayParam
Creates a param pair with a `java.util.List` of values (for Java and Python).
waitTillTime(long) - Method in interface org.apache.spark.util.Clock
Wait until the wall clock reaches at least the given time.
waitUntilEmpty(long) - Method in class org.apache.spark.scheduler.AsyncEventQueue
For testing only.
warmUp(SparkContext) - Static method in class org.apache.spark.streaming.util.RawTextHelper
Warms up the SparkContext in master and slave by running tasks to force JIT to kick in before the real workload starts.
weakIntern(String) - Static method in class org.apache.spark.status.LiveEntityHelpers
String interning to reduce the memory usage.
weekofyear(Column) - Static method in class org.apache.spark.sql.functions
Extracts the week number as an integer from a given date/timestamp/string.
WeibullGenerator - Class in org.apache.spark.mllib.random
:: DeveloperApi :: Generates i.i.d. samples from the Weibull distribution with the given shape and scale parameter.
WeibullGenerator(double, double) - Constructor for class org.apache.spark.mllib.random.WeibullGenerator
 
weight() - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
Weighted count of instances in this aggregator.
weight() - Method in interface org.apache.spark.scheduler.Schedulable
 
weightCol() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
 
weightCol() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
 
weightCol() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
 
weightCol() - Method in class org.apache.spark.ml.classification.GBTClassifier
 
weightCol() - Method in class org.apache.spark.ml.classification.LinearSVC
 
weightCol() - Method in class org.apache.spark.ml.classification.LinearSVCModel
 
weightCol() - Method in class org.apache.spark.ml.classification.LogisticRegression
 
weightCol() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
 
weightCol() - Method in class org.apache.spark.ml.classification.NaiveBayes
 
weightCol() - Method in class org.apache.spark.ml.classification.NaiveBayesModel
 
weightCol() - Method in class org.apache.spark.ml.classification.OneVsRest
 
weightCol() - Method in class org.apache.spark.ml.classification.OneVsRestModel
 
weightCol() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
 
weightCol() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
 
weightCol() - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
 
weightCol() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
 
weightCol() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
 
weightCol() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
 
weightCol() - Method in interface org.apache.spark.ml.param.shared.HasWeightCol
Param for weight column name.
weightCol() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
 
weightCol() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
 
weightCol() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
 
weightCol() - Method in class org.apache.spark.ml.regression.GBTRegressor
 
weightCol() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
 
weightCol() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
 
weightCol() - Method in class org.apache.spark.ml.regression.IsotonicRegression
 
weightCol() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
 
weightCol() - Method in class org.apache.spark.ml.regression.LinearRegression
 
weightCol() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
 
weightCol() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
 
weightCol() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
 
weightedFalseNegatives() - Method in interface org.apache.spark.mllib.evaluation.binary.BinaryConfusionMatrix
Weighted number of false negatives.
weightedFalsePositiveRate() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
Returns weighted false positive rate.
weightedFalsePositiveRate() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
 
weightedFalsePositives() - Method in interface org.apache.spark.mllib.evaluation.binary.BinaryConfusionMatrix
Weighted number of false positives.
weightedFMeasure(double) - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
Returns weighted averaged f-measure.
weightedFMeasure() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
Returns weighted averaged f1-measure.
weightedFMeasure(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
Returns weighted averaged f-measure.
weightedFMeasure() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
 
weightedNegatives() - Method in interface org.apache.spark.mllib.evaluation.binary.BinaryConfusionMatrix
Weighted number of negatives.
weightedPositives() - Method in interface org.apache.spark.mllib.evaluation.binary.BinaryConfusionMatrix
Weighted number of positives.
weightedPrecision() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
Returns weighted averaged precision.
weightedPrecision() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
 
weightedRecall() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
Returns weighted averaged recall.
weightedRecall() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
 
weightedTrueNegatives() - Method in interface org.apache.spark.mllib.evaluation.binary.BinaryConfusionMatrix
Weighted number of true negatives.
weightedTruePositiveRate() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
Returns weighted true positive rate.
weightedTruePositiveRate() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
 
weightedTruePositives() - Method in interface org.apache.spark.mllib.evaluation.binary.BinaryConfusionMatrix
Weighted number of true positives.
weights() - Method in interface org.apache.spark.ml.ann.LayerModel
 
weights() - Method in interface org.apache.spark.ml.ann.TopologyModel
 
weights() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
 
weights() - Method in class org.apache.spark.ml.clustering.ExpectationAggregator
 
weights() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
 
weights() - Method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data
 
weights() - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
 
weights() - Method in class org.apache.spark.mllib.classification.SVMModel
 
weights() - Method in class org.apache.spark.mllib.clustering.ExpectationSum
 
weights() - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
 
weights() - Method in class org.apache.spark.mllib.regression.GeneralizedLinearModel
 
weights() - Method in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$.Data
 
weights() - Method in class org.apache.spark.mllib.regression.LassoModel
 
weights() - Method in class org.apache.spark.mllib.regression.LinearRegressionModel
 
weights() - Method in class org.apache.spark.mllib.regression.RidgeRegressionModel
 
weightSize() - Method in interface org.apache.spark.ml.ann.Layer
Number of weights that is used to allocate memory for the weights vector.
weightSum() - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
 
weightSum() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
Sum of weights.
weightSum() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
Sum of weights.
WelchTTest - Class in org.apache.spark.mllib.stat.test
Performs Welch's 2-sample t-test.
WelchTTest() - Constructor for class org.apache.spark.mllib.stat.test.WelchTTest
 
when(Column, Object) - Method in class org.apache.spark.sql.Column
Evaluates a list of conditions and returns one of multiple possible result expressions.
when(Column, Object) - Static method in class org.apache.spark.sql.functions
Evaluates a list of conditions and returns one of multiple possible result expressions.
where(Column) - Method in class org.apache.spark.sql.Dataset
Filters rows using the given condition.
where(String) - Method in class org.apache.spark.sql.Dataset
Filters rows using the given SQL expression.
wholeTextFiles(String, int) - Method in class org.apache.spark.api.java.JavaSparkContext
Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI.
wholeTextFiles(String) - Method in class org.apache.spark.api.java.JavaSparkContext
Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI.
wholeTextFiles(String, int) - Method in class org.apache.spark.SparkContext
Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI.
width() - Method in class org.apache.spark.util.sketch.CountMinSketch
Width of this CountMinSketch.
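The width() entry above is the number of counters per hash row of the sketch; together with the depth it bounds how far estimates can exceed the true counts. A toy count-min sketch in plain Python (a hypothetical class for illustration, not the Spark org.apache.spark.util.sketch API):

```python
import hashlib

class ToyCountMinSketch:
    """Toy count-min sketch: `depth` hash rows of `width` counters each.
    Estimates never underestimate; the min over rows caps the overestimate."""

    def __init__(self, depth, width):
        self.depth, self.width = depth, width
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, item, row):
        # One independent-ish hash function per row, derived from sha256.
        h = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
        return int(h, 16) % self.width

    def add(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._index(item, row)] += count

    def estimate(self, item):
        return min(self.table[row][self._index(item, row)]
                   for row in range(self.depth))

sketch = ToyCountMinSketch(depth=5, width=64)
sketch.add("spark", 3)
print(sketch.estimate("spark"))   # 3: the only item added, so no collisions
```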
Window - Class in org.apache.spark.sql.expressions
Utility functions for defining windows in DataFrames.
Window() - Constructor for class org.apache.spark.sql.expressions.Window
 
window(Column, String, String, String) - Static method in class org.apache.spark.sql.functions
Bucketize rows into one or more time windows given a timestamp specifying column.
window(Column, String, String) - Static method in class org.apache.spark.sql.functions
Bucketize rows into one or more time windows given a timestamp specifying column.
window(Column, String) - Static method in class org.apache.spark.sql.functions
Generates tumbling time windows given a timestamp specifying column.
window(Duration) - Method in class org.apache.spark.streaming.api.java.JavaDStream
Return a new DStream in which each RDD contains all the elements seen in a sliding window of time over this DStream.
window(Duration, Duration) - Method in class org.apache.spark.streaming.api.java.JavaDStream
Return a new DStream in which each RDD contains all the elements seen in a sliding window of time over this DStream.
window(Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream which is computed based on windowed batches of this DStream.
window(Duration, Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream which is computed based on windowed batches of this DStream.
window(Duration) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD contains all the elements seen in a sliding window of time over this DStream.
window(Duration, Duration) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD contains all the elements seen in a sliding window of time over this DStream.
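The window entries above bucket rows by a timestamp. For the tumbling (non-overlapping) case, the bucketing rule reduces to modular arithmetic on the timestamp; a plain-Python sketch of that rule (assumes epoch-second timestamps; not the Spark implementation):

```python
def tumbling_window_start(ts, window_seconds):
    """Start of the tumbling window containing epoch-second timestamp ts."""
    return ts - (ts % window_seconds)

# Group events into 10-second tumbling windows keyed by window start.
events = [3, 7, 12, 14, 21]   # epoch seconds
buckets = {}
for ts in events:
    buckets.setdefault(tumbling_window_start(ts, 10), []).append(ts)

print(buckets)   # {0: [3, 7], 10: [12, 14], 20: [21]}
```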
windowsDrive() - 类 中的静态方法org.apache.spark.util.Utils
Pattern for matching a Windows drive, which contains only a single alphabet character.
windowSize() - 类 中的方法org.apache.spark.ml.feature.Word2Vec
 
windowSize() - 接口 中的方法org.apache.spark.ml.feature.Word2VecBase
The window size (context words from [-window, window]).
windowSize() - 类 中的方法org.apache.spark.ml.feature.Word2VecModel
 
WindowSpec - org.apache.spark.sql.expressions中的类
A window specification that defines the partitioning, ordering, and frame boundaries.
wipe() - 类 中的方法org.apache.spark.mllib.optimization.NNLS.Workspace
 
withCentering() - 类 中的方法org.apache.spark.ml.feature.RobustScaler
 
withCentering() - 类 中的方法org.apache.spark.ml.feature.RobustScalerModel
 
withCentering() - 接口 中的方法org.apache.spark.ml.feature.RobustScalerParams
Whether to center the data with median before scaling.
withColumn(String, Column) - 类 中的方法org.apache.spark.sql.Dataset
Returns a new Dataset by adding a column or replacing the existing column that has the same name.
withColumnRenamed(String, String) - 类 中的方法org.apache.spark.sql.Dataset
Returns a new Dataset with a column renamed.
withComment(String) - 类 中的方法org.apache.spark.sql.types.StructField
Updates the StructField with a new comment value.
withContextClassLoader(ClassLoader, Function0<T>) - 类 中的静态方法org.apache.spark.util.Utils
Run a segment of code using a different context class loader in the current thread
withDummyCallSite(SparkContext, Function0<T>) - 类 中的静态方法org.apache.spark.util.Utils
To avoid calling Utils.getCallSite for every single RDD we create in the body, set a dummy call site that RDDs use instead.
withEdges(EdgeRDD<?>) - 类 中的方法org.apache.spark.graphx.impl.VertexRDDImpl
 
withEdges(EdgeRDD<?>) - Method in class org.apache.spark.graphx.VertexRDD
Prepares this VertexRDD for efficient joins with the given EdgeRDD.
withExtensions(Function1<SparkSessionExtensions, BoxedUnit>) - Method in class org.apache.spark.sql.SparkSession.Builder
Inject extensions into the SparkSession.
withFitEvent(Estimator<M>, Dataset<?>, Function0<M>) - Method in interface org.apache.spark.ml.MLEvents

withHiveExternalCatalog(SparkContext) - Static method in class org.apache.spark.sql.hive.HiveUtils

withHiveState(Function0<A>) - Method in interface org.apache.spark.sql.hive.client.HiveClient
Run a function within Hive state (SessionState, HiveConf, Hive client and class loader).
withIndex(int) - Method in class org.apache.spark.ml.attribute.Attribute
Copy with a new index.
withIndex(int) - Method in class org.apache.spark.ml.attribute.BinaryAttribute

withIndex(int) - Method in class org.apache.spark.ml.attribute.NominalAttribute

withIndex(int) - Method in class org.apache.spark.ml.attribute.NumericAttribute

withIndex(int) - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute

withInputDataSchema(StructType) - Method in interface org.apache.spark.sql.connector.write.WriteBuilder
Passes the schema of the input data from Spark to the data source.
withListener(Function1<org.apache.spark.streaming.ui.StreamingJobProgressListener, T>) - Method in interface org.apache.spark.status.api.v1.streaming.BaseStreamingAppResource

withListener(SparkContext, L, Function1<L, BoxedUnit>) - Static method in class org.apache.spark.TestUtils
Runs some code with the given listener installed in the SparkContext.
withLoadInstanceEvent(MLReader<T>, String, Function0<T>) - Method in interface org.apache.spark.ml.MLEvents

withMapStatuses(Function1<MapStatus[], T>) - Method in class org.apache.spark.ShuffleStatus
Helper function which provides thread-safe access to the mapStatuses array.
withMax(double) - Method in class org.apache.spark.ml.attribute.NumericAttribute
Copy with a new max value.
withMean() - Method in class org.apache.spark.ml.feature.StandardScaler

withMean() - Method in class org.apache.spark.ml.feature.StandardScalerModel

withMean() - Method in interface org.apache.spark.ml.feature.StandardScalerParams
Whether to center the data with mean before scaling.
withMean() - Method in class org.apache.spark.mllib.feature.StandardScalerModel

withMetadata(Metadata) - Method in class org.apache.spark.sql.types.MetadataBuilder
Include the content of an existing Metadata instance.
withMin(double) - Method in class org.apache.spark.ml.attribute.NumericAttribute
Copy with a new min value.
withName(String) - Method in class org.apache.spark.ml.attribute.Attribute
Copy with a new name.
withName(String) - Method in class org.apache.spark.ml.attribute.BinaryAttribute

withName(String) - Method in class org.apache.spark.ml.attribute.NominalAttribute

withName(String) - Method in class org.apache.spark.ml.attribute.NumericAttribute

withName(String) - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute

withName(String) - Static method in class org.apache.spark.mllib.tree.configuration.Algo

withName(String) - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy

withName(String) - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType

withName(String) - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy

withName(String) - Static method in class org.apache.spark.rdd.CheckpointState

withName(String) - Static method in class org.apache.spark.rdd.DeterministicLevel

withName(String) - Static method in class org.apache.spark.scheduler.SchedulingMode

withName(String) - Static method in class org.apache.spark.scheduler.TaskLocality

withName(String) - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
Updates UserDefinedFunction with a given name.
withName(String) - Static method in class org.apache.spark.streaming.scheduler.ReceiverState

withName(String) - Static method in class org.apache.spark.TaskState

withNullSafe(Function1<Object, Object>) - Method in interface org.apache.spark.sql.hive.HiveInspectors

withNumValues(int) - Method in class org.apache.spark.ml.attribute.NominalAttribute
Copy with a new numValues and empty values.
withoutIndex() - Method in class org.apache.spark.ml.attribute.Attribute
Copy without the index.
withoutIndex() - Method in class org.apache.spark.ml.attribute.BinaryAttribute

withoutIndex() - Method in class org.apache.spark.ml.attribute.NominalAttribute

withoutIndex() - Method in class org.apache.spark.ml.attribute.NumericAttribute

withoutIndex() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute

withoutMax() - Method in class org.apache.spark.ml.attribute.NumericAttribute
Copy without the max value.
withoutMin() - Method in class org.apache.spark.ml.attribute.NumericAttribute
Copy without the min value.
withoutName() - Method in class org.apache.spark.ml.attribute.Attribute
Copy without the name.
withoutName() - Method in class org.apache.spark.ml.attribute.BinaryAttribute

withoutName() - Method in class org.apache.spark.ml.attribute.NominalAttribute

withoutName() - Method in class org.apache.spark.ml.attribute.NumericAttribute

withoutName() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute

withoutNumValues() - Method in class org.apache.spark.ml.attribute.NominalAttribute
Copy without the numValues.
withoutSparsity() - Method in class org.apache.spark.ml.attribute.NumericAttribute
Copy without the sparsity.
withoutStd() - Method in class org.apache.spark.ml.attribute.NumericAttribute
Copy without the standard deviation.
withoutSummary() - Method in class org.apache.spark.ml.attribute.NumericAttribute
Copy without summary statistics.
withoutValues() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
Copy without the values.
withoutValues() - Method in class org.apache.spark.ml.attribute.NominalAttribute
Copy without the values.
withPathFilter(double, SparkSession, long, Function0<T>) - Static method in class org.apache.spark.ml.image.SamplePathFilter
Sets the HDFS PathFilter flag and then restores it.
withPosition(Option<Object>, Option<Object>) - Method in exception org.apache.spark.sql.AnalysisException

withQueryId(String) - Method in interface org.apache.spark.sql.connector.write.WriteBuilder
Passes the `queryId` from Spark to the data source.
withRecursiveFlag(boolean, SparkSession, Function0<T>) - Static method in class org.apache.spark.ml.image.RecursiveFlag
Sets the Spark recursive flag and then restores it.
withReferences(Seq<NamedReference>) - Method in interface org.apache.spark.sql.connector.expressions.RewritableTransform
Creates a copy of this transform with the new analyzed references.
withResourcesJson(String, Function1<String, Seq<T>>) - Static method in class org.apache.spark.resource.ResourceUtils

withSaveInstanceEvent(MLWriter, String, Function0<BoxedUnit>) - Method in interface org.apache.spark.ml.MLEvents

withScaling() - Method in class org.apache.spark.ml.feature.RobustScaler

withScaling() - Method in class org.apache.spark.ml.feature.RobustScalerModel

withScaling() - Method in interface org.apache.spark.ml.feature.RobustScalerParams
Whether to scale the data to quantile range.
withSparkUI(String, Option<String>, Function1<org.apache.spark.ui.SparkUI, T>) - Method in interface org.apache.spark.status.api.v1.UIRoot
Runs some code with the current SparkUI instance for the app / attempt.
withSparsity(double) - Method in class org.apache.spark.ml.attribute.NumericAttribute
Copy with a new sparsity.
withStd(double) - Method in class org.apache.spark.ml.attribute.NumericAttribute
Copy with a new standard deviation.
withStd() - Method in class org.apache.spark.ml.feature.StandardScaler

withStd() - Method in class org.apache.spark.ml.feature.StandardScalerModel

withStd() - Method in interface org.apache.spark.ml.feature.StandardScalerParams
Whether to scale the data to unit standard deviation.
withStd() - Method in class org.apache.spark.mllib.feature.StandardScalerModel

withTransformEvent(Transformer, Dataset<?>, Function0<Dataset<Row>>) - Method in interface org.apache.spark.ml.MLEvents

withUI(Function1<org.apache.spark.ui.SparkUI, T>) - Method in interface org.apache.spark.status.api.v1.BaseAppResource

withValues(String, String) - Method in class org.apache.spark.ml.attribute.BinaryAttribute
Copy with new values.
withValues(String, String...) - Method in class org.apache.spark.ml.attribute.NominalAttribute
Copy with new values and empty numValues.
withValues(String[]) - Method in class org.apache.spark.ml.attribute.NominalAttribute
Copy with new values and empty numValues.
withValues(String, Seq<String>) - Method in class org.apache.spark.ml.attribute.NominalAttribute
Copy with new values and empty numValues.
withWatermark(String, String) - Method in class org.apache.spark.sql.Dataset
Defines an event time watermark for this Dataset.
word() - Method in class org.apache.spark.mllib.feature.VocabWord

Word2Vec - Class in org.apache.spark.ml.feature
Word2Vec trains a model of Map(String, Vector), i.e. transforms a word into a code for further natural language processing or machine learning process.
Word2Vec(String) - Constructor for class org.apache.spark.ml.feature.Word2Vec

Word2Vec() - Constructor for class org.apache.spark.ml.feature.Word2Vec

Word2Vec - Class in org.apache.spark.mllib.feature
Word2Vec creates vector representation of words in a text corpus.
Word2Vec() - Constructor for class org.apache.spark.mllib.feature.Word2Vec

Word2VecBase - Interface in org.apache.spark.ml.feature
Params for Word2Vec and Word2VecModel.
Word2VecModel - Class in org.apache.spark.ml.feature
Model fitted by Word2Vec.
Word2VecModel - Class in org.apache.spark.mllib.feature
Word2Vec model. param: wordIndex maps each word to an index, which can retrieve the corresponding vector from wordVectors. param: wordVectors array of length numWords * vectorSize; the vector corresponding to the word mapped with index i can be retrieved by the slice (i * vectorSize, i * vectorSize + vectorSize).
Word2VecModel(Map<String, float[]>) - Constructor for class org.apache.spark.mllib.feature.Word2VecModel

Word2VecModel.Word2VecModelWriter$ - Class in org.apache.spark.ml.feature

Word2VecModelWriter$() - Constructor for class org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter$

Worker - Class in org.apache.spark.internal.config

Worker() - Constructor for class org.apache.spark.internal.config.Worker

WORKER() - Static method in class org.apache.spark.metrics.MetricsSystemInstances

WORKER_CLEANUP_ENABLED() - Static method in class org.apache.spark.internal.config.Worker

WORKER_CLEANUP_INTERVAL() - Static method in class org.apache.spark.internal.config.Worker

WORKER_DRIVER_TERMINATE_TIMEOUT() - Static method in class org.apache.spark.internal.config.Worker

WORKER_TIMEOUT() - Static method in class org.apache.spark.internal.config.Worker

WORKER_UI_PORT() - Static method in class org.apache.spark.internal.config.Worker

WORKER_UI_RETAINED_DRIVERS() - Static method in class org.apache.spark.internal.config.Worker

WORKER_UI_RETAINED_EXECUTORS() - Static method in class org.apache.spark.internal.config.Worker

workerId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker

workerRemoved(String, String, String) - Method in interface org.apache.spark.scheduler.TaskScheduler
Process a removed worker.
Workspace(int) - Constructor for class org.apache.spark.mllib.optimization.NNLS.Workspace

wrap(Object, ObjectInspector, DataType) - Method in interface org.apache.spark.sql.hive.HiveInspectors

wrap(InternalRow, Function1<Object, Object>[], Object[], DataType[]) - Method in interface org.apache.spark.sql.hive.HiveInspectors

wrap(Seq<Object>, Function1<Object, Object>[], Object[], DataType[]) - Method in interface org.apache.spark.sql.hive.HiveInspectors

wrap(Object, ObjectInspector, DataType) - Static method in class org.apache.spark.sql.hive.orc.OrcFileFormat

wrap(InternalRow, Function1<Object, Object>[], Object[], DataType[]) - Static method in class org.apache.spark.sql.hive.orc.OrcFileFormat

wrap(Seq<Object>, Function1<Object, Object>[], Object[], DataType[]) - Static method in class org.apache.spark.sql.hive.orc.OrcFileFormat

wrapperClass() - Static method in class org.apache.spark.serializer.JavaIterableWrapperSerializer

wrapperFor(ObjectInspector, DataType) - Method in interface org.apache.spark.sql.hive.HiveInspectors
Wraps with Hive types based on object inspector.
wrapperToFileSinkDesc(HiveShim.ShimFileSinkDesc) - Static method in class org.apache.spark.sql.hive.HiveShim

wrapRDD(RDD<Double>) - Method in class org.apache.spark.api.java.JavaDoubleRDD

wrapRDD(RDD<Tuple2<K, V>>) - Method in class org.apache.spark.api.java.JavaPairRDD

wrapRDD(RDD<T>) - Method in class org.apache.spark.api.java.JavaRDD

wrapRDD(RDD<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike

wrapRDD(RDD<T>) - Method in class org.apache.spark.streaming.api.java.JavaDStream

wrapRDD(RDD<T>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike

wrapRDD(RDD<Tuple2<K, V>>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream

WritableByteChannelWrapper - Interface in org.apache.spark.shuffle.api
:: Private :: A thin wrapper around a WritableByteChannel.
write(Tuple2<K, V>) - Method in class org.apache.spark.internal.io.HadoopWriteConfigUtil

write(RDD<Tuple2<K, V>>, HadoopWriteConfigUtil<K, V>, ClassTag<V>) - Static method in class org.apache.spark.internal.io.SparkHadoopWriter
Basic workflow of this command is: 1.
write() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel

write() - Method in class org.apache.spark.ml.classification.GBTClassificationModel

write() - Method in class org.apache.spark.ml.classification.LinearSVCModel

write() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
Returns an MLWriter instance for this ML instance.
write() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel

write() - Method in class org.apache.spark.ml.classification.NaiveBayesModel

write() - Method in class org.apache.spark.ml.classification.OneVsRest

write() - Method in class org.apache.spark.ml.classification.OneVsRestModel

write() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel

write() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel

write() - Method in class org.apache.spark.ml.clustering.DistributedLDAModel

write() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
Returns an MLWriter instance for this ML instance.
write(String, SparkSession, Map<String, String>, PipelineStage) - Method in class org.apache.spark.ml.clustering.InternalKMeansModelWriter

write() - Method in class org.apache.spark.ml.clustering.KMeansModel
Returns a GeneralMLWriter instance for this ML instance.
write() - Method in class org.apache.spark.ml.clustering.LocalLDAModel

write(String, SparkSession, Map<String, String>, PipelineStage) - Method in class org.apache.spark.ml.clustering.PMMLKMeansModelWriter

write() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel

write() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel

write() - Method in class org.apache.spark.ml.feature.ColumnPruner

write() - Method in class org.apache.spark.ml.feature.CountVectorizerModel

write() - Method in class org.apache.spark.ml.feature.IDFModel

write() - Method in class org.apache.spark.ml.feature.ImputerModel

write() - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel

write() - Method in class org.apache.spark.ml.feature.MinHashLSHModel

write() - Method in class org.apache.spark.ml.feature.MinMaxScalerModel

write() - Method in class org.apache.spark.ml.feature.OneHotEncoderModel

write() - Method in class org.apache.spark.ml.feature.PCAModel

write() - Method in class org.apache.spark.ml.feature.RFormulaModel

write() - Method in class org.apache.spark.ml.feature.RobustScalerModel

write() - Method in class org.apache.spark.ml.feature.StandardScalerModel

write() - Method in class org.apache.spark.ml.feature.StringIndexerModel

write() - Method in class org.apache.spark.ml.feature.VectorAttributeRewriter

write() - Method in class org.apache.spark.ml.feature.VectorIndexerModel

write() - Method in class org.apache.spark.ml.feature.Word2VecModel

write() - Method in class org.apache.spark.ml.fpm.FPGrowthModel

write() - Method in class org.apache.spark.ml.Pipeline

write() - Method in class org.apache.spark.ml.PipelineModel

write() - Method in class org.apache.spark.ml.recommendation.ALSModel

write() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel

write() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel

write() - Method in class org.apache.spark.ml.regression.GBTRegressionModel

write() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
Returns an MLWriter instance for this ML instance.
write(String, SparkSession, Map<String, String>, PipelineStage) - Method in class org.apache.spark.ml.regression.InternalLinearRegressionModelWriter

write() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel

write() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
Returns a GeneralMLWriter instance for this ML instance.
write(String, SparkSession, Map<String, String>, PipelineStage) - Method in class org.apache.spark.ml.regression.PMMLLinearRegressionModelWriter

write() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel

write() - Method in class org.apache.spark.ml.tuning.CrossValidator

write() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel

write() - Method in class org.apache.spark.ml.tuning.TrainValidationSplit

write() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel

write() - Method in interface org.apache.spark.ml.util.DefaultParamsWritable

write() - Method in interface org.apache.spark.ml.util.GeneralMLWritable
Returns an MLWriter instance for this ML instance.
write() - Method in interface org.apache.spark.ml.util.MLWritable
Returns an MLWriter instance for this ML instance.
write(String, SparkSession, Map<String, String>, PipelineStage) - Method in interface org.apache.spark.ml.util.MLWriterFormat
Function to write the provided pipeline stage out.
write(Kryo, Output, Iterable<?>) - Method in class org.apache.spark.serializer.JavaIterableWrapperSerializer

write(T) - Method in interface org.apache.spark.sql.connector.write.DataWriter
Writes one record.
write() - Method in class org.apache.spark.sql.Dataset
Interface for saving the content of the non-streaming Dataset out into external storage.
write(InternalRow) - Method in class org.apache.spark.sql.hive.execution.HiveOutputWriter

write(ByteBuffer) - Method in class org.apache.spark.storage.CountingWritableChannel

write(int) - Method in class org.apache.spark.storage.TimeTrackingOutputStream

write(byte[]) - Method in class org.apache.spark.storage.TimeTrackingOutputStream

write(byte[], int, int) - Method in class org.apache.spark.storage.TimeTrackingOutputStream

write(ByteBuffer, long) - Method in class org.apache.spark.streaming.util.WriteAheadLog
Write the record to the log and return a record handle, which contains all the information necessary to read back the written record.
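The write/handle contract described above can be illustrated with a minimal file-backed sketch. This is not Spark's WriteAheadLog implementation (the class name and handle shape here are hypothetical); it only shows the idea that write() durably appends a record and returns a handle sufficient to read it back later:

```python
# Illustrative write-ahead-log sketch (not Spark's implementation).
# write() appends a length-prefixed record, fsyncs, and returns a handle
# (offset, length); read() uses the handle to fetch the record back.
import os
import struct

class SimpleWriteAheadLog:
    def __init__(self, path):
        self.path = path
        self.f = open(path, "ab")

    def write(self, record: bytes):
        offset = self.f.tell()
        self.f.write(struct.pack(">I", len(record)))  # 4-byte big-endian length prefix
        self.f.write(record)
        self.f.flush()
        os.fsync(self.f.fileno())  # force to stable storage before acknowledging
        return (offset, len(record))

    def read(self, handle):
        offset, length = handle
        with open(self.path, "rb") as r:
            r.seek(offset + 4)  # skip the length prefix
            return r.read(length)
```

A handle returned by write() stays valid across process restarts, which is what lets a streaming driver recover received data after a failure.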
WRITE_TIME() - Method in class org.apache.spark.InternalAccumulator.shuffleWrite$

WriteAheadLog - Class in org.apache.spark.streaming.util
:: DeveloperApi :: This abstract class represents a write ahead log (aka journal) that is used by Spark Streaming to save the received data (by receivers) and associated metadata to a reliable storage, so that they can be recovered after driver failures.
WriteAheadLog() - Constructor for class org.apache.spark.streaming.util.WriteAheadLog

WriteAheadLogRecordHandle - Class in org.apache.spark.streaming.util
:: DeveloperApi :: This abstract class represents a handle that refers to a record written in a WriteAheadLog.
WriteAheadLogRecordHandle() - Constructor for class org.apache.spark.streaming.util.WriteAheadLogRecordHandle

WriteAheadLogUtils - Class in org.apache.spark.streaming.util
A helper class with utility functions related to the WriteAheadLog interface.
WriteAheadLogUtils() - Constructor for class org.apache.spark.streaming.util.WriteAheadLogUtils

writeAll(Iterator<T>, ClassTag<T>) - Method in class org.apache.spark.serializer.SerializationStream

writeBoolean(DataOutputStream, boolean) - Static method in class org.apache.spark.api.r.SerDe

writeBooleanArr(DataOutputStream, boolean[]) - Static method in class org.apache.spark.api.r.SerDe

WriteBuilder - Interface in org.apache.spark.sql.connector.write
An interface for building the BatchWrite.
writeByteBuffer(ByteBuffer, DataOutput) - Static method in class org.apache.spark.util.Utils
Primitive often used when writing ByteBuffer to DataOutput.
writeByteBuffer(ByteBuffer, OutputStream) - Static method in class org.apache.spark.util.Utils
Primitive often used when writing ByteBuffer to OutputStream.
writeBytes(DataOutputStream, byte[]) - Static method in class org.apache.spark.api.r.SerDe

writeBytes() - Method in class org.apache.spark.status.api.v1.ShuffleWriteMetricDistributions

WriteConfigMethods<R> - Interface in org.apache.spark.sql
Configuration methods common to create/replace operations and insert/overwrite operations.
writeDate(DataOutputStream, Date) - Static method in class org.apache.spark.api.r.SerDe

writeDouble(DataOutputStream, double) - Static method in class org.apache.spark.api.r.SerDe

writeDoubleArr(DataOutputStream, double[]) - Static method in class org.apache.spark.api.r.SerDe

writeEventLogs(String, Option<String>, ZipOutputStream) - Method in interface org.apache.spark.status.api.v1.UIRoot
Write the event logs for the given app to the ZipOutputStream instance.
writeExternal(ObjectOutput) - Method in class org.apache.spark.serializer.JavaSerializer

writeExternal(ObjectOutput) - Method in class org.apache.spark.storage.BlockManagerId

writeExternal(ObjectOutput) - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo

writeExternal(ObjectOutput) - Method in class org.apache.spark.storage.StorageLevel

writeInt(DataOutputStream, int) - Static method in class org.apache.spark.api.r.SerDe

writeIntArr(DataOutputStream, int[]) - Static method in class org.apache.spark.api.r.SerDe

writeJObj(DataOutputStream, Object, JVMObjectTracker) - Static method in class org.apache.spark.api.r.SerDe

writeKey(T, ClassTag<T>) - Method in class org.apache.spark.serializer.SerializationStream
Writes the object representing the key of a key-value pair.
writeObject(DataOutputStream, Object, JVMObjectTracker) - Static method in class org.apache.spark.api.r.SerDe

writeObject(T, ClassTag<T>) - Method in class org.apache.spark.serializer.SerializationStream
The most general-purpose method to write an object.
writer() - Method in class org.apache.spark.ml.SaveInstanceEnd

writer() - Method in class org.apache.spark.ml.SaveInstanceStart

WriterCommitMessage - Interface in org.apache.spark.sql.connector.write
A commit message returned by DataWriter.commit() that will be sent back to the driver side as the input parameter of BatchWrite.commit(WriterCommitMessage[]) or StreamingWrite.commit(long, WriterCommitMessage[]).
writeRecords() - Method in class org.apache.spark.status.api.v1.ShuffleWriteMetricDistributions

writeSqlObject(DataOutputStream, Object) - Static method in class org.apache.spark.sql.api.r.SQLUtils

writeStream() - Method in class org.apache.spark.sql.Dataset
Interface for saving the content of the streaming Dataset out into external storage.
writeString(DataOutputStream, String) - Static method in class org.apache.spark.api.r.SerDe

writeStringArr(DataOutputStream, String[]) - Static method in class org.apache.spark.api.r.SerDe

writeTime(DataOutputStream, Time) - Static method in class org.apache.spark.api.r.SerDe

writeTime(DataOutputStream, Timestamp) - Static method in class org.apache.spark.api.r.SerDe

writeTime() - Method in class org.apache.spark.status.api.v1.ShuffleWriteMetricDistributions

writeTime() - Method in class org.apache.spark.status.api.v1.ShuffleWriteMetrics

writeTo(String) - Method in class org.apache.spark.sql.Dataset
Create a write configuration builder for v2 sources.
writeTo(OutputStream) - Method in class org.apache.spark.util.sketch.BloomFilter
Writes out this BloomFilter to an output stream in binary format.
writeTo(OutputStream) - Method in class org.apache.spark.util.sketch.CountMinSketch
Writes out this CountMinSketch to an output stream in binary format.
writeType(DataOutputStream, String) - Static method in class org.apache.spark.api.r.SerDe

writeValue(T, ClassTag<T>) - Method in class org.apache.spark.serializer.SerializationStream
Writes the object representing the value of a key-value pair.
writingCommandClassName() - Method in interface org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectBase

writingCommandClassName() - Method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand

writingCommandClassName() - Method in class org.apache.spark.sql.hive.execution.OptimizedCreateHiveTableAsSelectCommand


X

x() - Method in class org.apache.spark.mllib.optimization.NNLS.Workspace

XssSafeRequest - Class in org.apache.spark.ui

XssSafeRequest(HttpServletRequest, String) - Constructor for class org.apache.spark.ui.XssSafeRequest

xxhash64(Column...) - Static method in class org.apache.spark.sql.functions
Calculates the hash code of given columns using the 64-bit variant of the xxHash algorithm, and returns the result as a long column.
xxhash64(Seq<Column>) - Static method in class org.apache.spark.sql.functions
Calculates the hash code of given columns using the 64-bit variant of the xxHash algorithm, and returns the result as a long column.

Y

year(Column) - Static method in class org.apache.spark.sql.functions
Extracts the year as an integer from a given date/timestamp/string.
years(String) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
Create a yearly transform for a timestamp or date column.
years(String) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions

years(Column) - Static method in class org.apache.spark.sql.functions
A transform for timestamps and dates to partition data into years.

Z

zero() - Method in class org.apache.spark.ml.feature.StringIndexerAggregator

zero(int, int) - Static method in class org.apache.spark.mllib.clustering.ExpectationSum

zero() - Method in class org.apache.spark.sql.expressions.Aggregator
A zero value for this aggregation.
zero() - Static method in class org.apache.spark.sql.types.ByteExactNumeric

zero() - Static method in class org.apache.spark.sql.types.DecimalExactNumeric

zero() - Static method in class org.apache.spark.sql.types.DoubleExactNumeric

zero() - Static method in class org.apache.spark.sql.types.FloatExactNumeric

zero() - Static method in class org.apache.spark.sql.types.IntegerExactNumeric

zero() - Static method in class org.apache.spark.sql.types.LongExactNumeric

zero() - Static method in class org.apache.spark.sql.types.ShortExactNumeric

zeros(int, int) - Static method in class org.apache.spark.ml.linalg.DenseMatrix
Generate a DenseMatrix consisting of zeros.
zeros(int, int) - Static method in class org.apache.spark.ml.linalg.Matrices
Generate a Matrix consisting of zeros.
zeros(int) - Static method in class org.apache.spark.ml.linalg.Vectors
Creates a vector of all zeros.
zeros(int, int) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
Generate a DenseMatrix consisting of zeros.
zeros(int, int) - Static method in class org.apache.spark.mllib.linalg.Matrices
Generate a Matrix consisting of zeros.
zeros(int) - Static method in class org.apache.spark.mllib.linalg.Vectors
Creates a vector of all zeros.
zip(JavaRDDLike<U, ?>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Zips this RDD with another one, returning key-value pairs with the first element in each RDD, second element in each RDD, etc.
zip(RDD<U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
Zips this RDD with another one, returning key-value pairs with the first element in each RDD, second element in each RDD, etc.
zip_with(Column, Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
Merge two given arrays, element-wise, into a single array using a function.
zipPartitions(JavaRDDLike<U, ?>, FlatMapFunction2<Iterator<T>, Iterator<U>, V>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Zip this RDD's partitions with one (or more) RDD(s) and return a new RDD by applying a function to the zipped partitions.
zipPartitions(RDD<B>, boolean, Function2<Iterator<T>, Iterator<B>, Iterator<V>>, ClassTag<B>, ClassTag<V>) - Method in class org.apache.spark.rdd.RDD
Zip this RDD's partitions with one (or more) RDD(s) and return a new RDD by applying a function to the zipped partitions.
zipPartitions(RDD<B>, Function2<Iterator<T>, Iterator<B>, Iterator<V>>, ClassTag<B>, ClassTag<V>) - Method in class org.apache.spark.rdd.RDD

zipPartitions(RDD<B>, RDD<C>, boolean, Function3<Iterator<T>, Iterator<B>, Iterator<C>, Iterator<V>>, ClassTag<B>, ClassTag<C>, ClassTag<V>) - Method in class org.apache.spark.rdd.RDD

zipPartitions(RDD<B>, RDD<C>, Function3<Iterator<T>, Iterator<B>, Iterator<C>, Iterator<V>>, ClassTag<B>, ClassTag<C>, ClassTag<V>) - Method in class org.apache.spark.rdd.RDD

zipPartitions(RDD<B>, RDD<C>, RDD<D>, boolean, Function4<Iterator<T>, Iterator<B>, Iterator<C>, Iterator<D>, Iterator<V>>, ClassTag<B>, ClassTag<C>, ClassTag<D>, ClassTag<V>) - Method in class org.apache.spark.rdd.RDD

zipPartitions(RDD<B>, RDD<C>, RDD<D>, Function4<Iterator<T>, Iterator<B>, Iterator<C>, Iterator<D>, Iterator<V>>, ClassTag<B>, ClassTag<C>, ClassTag<D>, ClassTag<V>) - Method in class org.apache.spark.rdd.RDD

zipWithIndex() - Method in interface org.apache.spark.api.java.JavaRDDLike
Zips this RDD with its element indices.
zipWithIndex() - Method in class org.apache.spark.rdd.RDD
Zips this RDD with its element indices.
zipWithUniqueId() - Method in interface org.apache.spark.api.java.JavaRDDLike
Zips this RDD with generated unique Long ids.
zipWithUniqueId() - Method in class org.apache.spark.rdd.RDD
Zips this RDD with generated unique Long ids.
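The difference between the two indexing entries above can be sketched without Spark. This is an illustrative plain-Python model (partitions as lists of lists, function names hypothetical): zipWithIndex assigns contiguous global indices, which requires knowing each partition's length first, while zipWithUniqueId assigns k * numPartitions + p to the k-th item of partition p, unique but non-contiguous and computable per-partition:

```python
# Plain-Python model of the two RDD indexing schemes (no Spark required).

def zip_with_index(partitions):
    # Contiguous global indices; needs per-partition lengths up front,
    # which is why the real zipWithIndex triggers a job for multi-partition RDDs.
    out, offset = [], 0
    for part in partitions:
        out.append([(x, offset + i) for i, x in enumerate(part)])
        offset += len(part)
    return out

def zip_with_unique_id(partitions):
    # Unique but non-contiguous ids: k * numPartitions + p for the k-th
    # item of partition p, computable without a separate pass over the data.
    n = len(partitions)
    return [[(x, k * n + p) for k, x in enumerate(part)]
            for p, part in enumerate(partitions)]

parts = [["a", "b"], ["c"], ["d", "e"]]
print(zip_with_index(parts))      # ids 0,1 / 2 / 3,4
print(zip_with_unique_id(parts))  # ids 0,3 / 1 / 2,5
```

Spark's zipWithUniqueId documents the same k * n + p scheme, so the gaps in the id sequence are expected behavior, not a bug.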
ZOOKEEPER_DIRECTORY() - Static method in class org.apache.spark.internal.config.Deploy

ZOOKEEPER_URL() - Static method in class org.apache.spark.internal.config.Deploy

ZStdCompressionCodec - Class in org.apache.spark.io
:: DeveloperApi :: ZStandard implementation of CompressionCodec.
ZStdCompressionCodec(SparkConf) - Constructor for class org.apache.spark.io.ZStdCompressionCodec
 

_

_1() - Method in class org.apache.spark.util.MutablePair

_2() - Method in class org.apache.spark.util.MutablePair
 